Some $m$-Fold Symmetric Bi-Univalent Function Classes and Their Associated Taylor-Maclaurin Coefficient Bounds
The Ruscheweyh derivative operator is used in this paper to introduce and investigate interesting general subclasses of the class $\Sigma_m$ of $m$-fold symmetric bi-univalent analytic functions. Estimates of the initial Taylor-Maclaurin coefficients $\left|a_{m+1}\right|$ and $\left|a_{2m+1}\right|$ are obtained for functions in the subclasses introduced in this study, and the consequences of the results are discussed. The results presented generalize and improve some recent works of earlier authors; in some cases, our estimates are better than the existing coefficient bounds. Furthermore, within the engineering domain, this paper delves into a series of complex issues related to analytic functions, $m$-fold symmetric univalent functions, and the utilization of the Ruscheweyh derivative operator. These problems encompass a broad spectrum of engineering applications, including the optimization of optical system designs, signal processing for antenna arrays, image compression techniques, and filter design for control systems. The paper underscores the crucial role of these mathematical concepts in addressing practical engineering dilemmas and in fine-tuning the performance of various engineering systems, and it emphasizes the potential for innovative solutions that can significantly enhance the reliability and effectiveness of engineering applications.
Introduction
Let A denote the class of functions f that are analytic in the open unit disk U = {z ∈ C : |z| < 1} and normalized by the conditions f(0) = f′(0) − 1 = 0, so that they have the Taylor-Maclaurin series expansion

f(z) = z + ∑_{n=2}^∞ a_n z^n. (1.1)

Assume that S is the subclass of A consisting of all functions of the form (1.1) that are univalent in U, and that P is the class of all functions of the form

h(z) = 1 + p_1 z + p_2 z^2 + p_3 z^3 + ⋯ (1.2)

which are analytic in the open unit disk U and satisfy Re(h(z)) > 0, z ∈ U.
The Koebe 1/4-theorem [2] asserts that the image of U under every univalent function f ∈ S contains a disk of radius 1/4, so every such f has an inverse f⁻¹ defined by f⁻¹(f(z)) = z (z ∈ U) and f(f⁻¹(w)) = w (|w| < r₀(f), r₀(f) ≥ 1/4). The inverse function g = f⁻¹ has the form

g(w) = f⁻¹(w) = w − a_2 w^2 + (2a_2^2 − a_3)w^3 − (5a_2^3 − 5a_2 a_3 + a_4)w^4 + ⋯ . (1.3)

A function f ∈ A is said to be bi-univalent if both f and f⁻¹ are univalent. The class of bi-univalent functions in U is denoted by Σ. Some examples of functions in the class Σ are z/(1 − z), −log(1 − z), and (1/2) log((1 + z)/(1 − z)), with the corresponding inverse functions w/(1 + w), (e^w − 1)/e^w, and (e^{2w} − 1)/(e^{2w} + 1), respectively.
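The coefficients in (1.3) can be checked by composing f and g = f⁻¹ and equating powers of w; since the displayed derivation did not survive extraction, here is a short reconstruction of the standard computation:

```latex
% Write f(z) = z + a_2 z^2 + a_3 z^3 + a_4 z^4 + \cdots and
% g(w) = w + b_2 w^2 + b_3 w^3 + b_4 w^4 + \cdots ;
% imposing f(g(w)) = w and equating coefficients gives
\begin{aligned}
w^2:&\; b_2 + a_2 = 0 &&\Longrightarrow\; b_2 = -a_2,\\
w^3:&\; b_3 + 2a_2 b_2 + a_3 = 0 &&\Longrightarrow\; b_3 = 2a_2^2 - a_3,\\
w^4:&\; b_4 + a_2\,(b_2^2 + 2b_3) + 3a_3 b_2 + a_4 = 0
     &&\Longrightarrow\; b_4 = -\bigl(5a_2^3 - 5a_2 a_3 + a_4\bigr),
\end{aligned}
```

which reproduces exactly the coefficients displayed in (1.3).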
Estimates for the bounds of the Taylor-Maclaurin coefficients |a_n| are an important problem in geometric function theory because they provide information about the geometric properties of these functions. Lewin [3] studied the class of bi-univalent functions and discovered that |a_2| < 1.51 for functions belonging to the class Σ. Later on, Brannan and Clunie [4] conjectured that |a_2| ≤ √2. Subsequently, Netanyahu [5] showed that max |a_2| = 4/3 for f ∈ Σ. Recently, many works have been devoted to studying the bi-univalent function class Σ and obtaining non-sharp bounds on the Taylor-Maclaurin coefficients |a_2| and |a_3|. In fact, in their pioneering work, Srivastava et al. [6] revived and significantly advanced the study of the analytic and bi-univalent function class in recent years. They also discovered bounds on |a_2| and |a_3| and were followed by many authors (see, for example, [7-14] and the references therein). The estimation of the bounds of |a_n| (n ∈ {4, 5, 6, ...}) for a function f ∈ Σ defined by (1.1) remains an open problem; in fact, for coefficients of index greater than three, there is no natural way to obtain an upper bound. There are a few articles in which Faber polynomial techniques were used to find upper bounds for higher-order coefficients (see, for example, [15-18]).
For each function f ∈ S, the function h(z) = [f(z^m)]^{1/m} (m ∈ N, z ∈ U) is univalent and maps the unit disk into a region with m-fold symmetry. A function f is said to be m-fold symmetric (see [19]), and the class of such functions is denoted by A_m, if it has the following normalized form:

f(z) = z + ∑_{k=1}^∞ a_{mk+1} z^{mk+1} (z ∈ U, m ∈ N). (1.5)

Assume that S_m denotes the class of m-fold symmetric univalent functions in U that are normalized by the series expansion (1.5); in fact, the functions in the class S are 1-fold symmetric. According to Koepf [19], an m-fold symmetric function h ∈ P has the form

h(z) = 1 + p_m z^m + p_{2m} z^{2m} + p_{3m} z^{3m} + ⋯ . (1.6)

Analogous to the concept of m-fold symmetric univalent functions, Srivastava et al. [20] defined the concept of an m-fold symmetric bi-univalent function in a direct way. Each function f ∈ Σ generates an m-fold symmetric bi-univalent function for each m ∈ N. The normalized form of f is given by (1.5), and the extension g = f⁻¹ is given as follows:

g(w) = w − a_{m+1} w^{m+1} + [(m + 1)a_{m+1}^2 − a_{2m+1}] w^{2m+1} + ⋯ . (1.7)

We denote the class of m-fold symmetric bi-univalent functions in U by Σ_m. For m = 1, the series (1.7) coincides with the series expansion (1.3) of the class Σ. Examples of m-fold symmetric bi-univalent functions include [z^m/(1 − z^m)]^{1/m}, [−log(1 − z^m)]^{1/m}, and [(1/2) log((1 + z^m)/(1 − z^m))]^{1/m}, with the corresponding inverse functions [w^m/(1 + w^m)]^{1/m}, [(e^{w^m} − 1)/e^{w^m}]^{1/m}, and [(e^{2w^m} − 1)/(e^{2w^m} + 1)]^{1/m}, respectively. For a function f ∈ A_m defined by (1.5), one can consider the m-fold Ruscheweyh derivative operator R^δ : A_m → A_m, analogous to the Ruscheweyh derivative R^δ : A → A; a reconstruction of the standard definition is sketched after this paragraph. In engineering, optimizing optical systems and designing effective control systems pose enormous challenges. Describing complex wavefronts necessitates the use of analytic and univalent functions tailored to specific optical constraints, while in signal processing for antenna arrays, employing m-fold symmetric univalent functions is crucial for beamforming amidst electromagnetic wave complexities, demanding innovation and precision. Control systems engineering utilizes univalent functions for filter design, where achieving the desired frequency response must align with system stability and minimal phase distortion, posing a continual challenge. Additionally, modeling complex mechanical systems requires leveraging the Ruscheweyh derivative operator to analyze functions representing system dynamics, facilitating critical parameter identification for system performance optimization. In robotics, univalent functions aid in controlling manipulators while navigating constraints related to joint angles and velocities. Moreover, in image compression and transmission for communication systems, the use of m-fold symmetric bi-univalent functions offers the potential for optimizing compression ratios while preserving image quality, representing an ongoing engineering challenge (see, for example, [25,26]).
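The displayed definition of the operator was lost in extraction. For orientation, here are the classical Ruscheweyh derivative and its natural m-fold analogue; this reconstruction is our assumption of the intended definition, not a quotation from the paper:

```latex
% Classical Ruscheweyh derivative on A (\delta \ge 0):
R^{\delta} f(z) = z + \sum_{n=2}^{\infty} \binom{\delta + n - 1}{n - 1}\, a_n z^n ,
\qquad
% natural m-fold analogue on A_m, acting on f of the form (1.5):
R^{\delta} f(z) = z + \sum_{k=1}^{\infty} \binom{\delta + mk}{mk}\, a_{mk+1} z^{mk+1}.
```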
This paper aims to introduce new general subclasses of m-fold symmetric bi-univalent functions in U by applying the m-fold Ruscheweyh derivative operator, to obtain estimates on the initial coefficients |a_{m+1}| and |a_{2m+1}| for functions in the subclasses Q_m(τ, λ, γ, δ; α) and Σ_m(τ, λ, γ, δ; β), and to improve many recent works. Moreover, we derive the Fekete-Szegö inequalities for these classes. To derive our main results, we need the following lemmas, which will be useful in proving the basic theorems in Sects. 2 and 3.
Lemma 1 [2] If h ∈ P with h(z) given by (1.2), then |p_n| ≤ 2 for each n ∈ N.

Lemma 2 [27] If h ∈ P with h(z) given by (1.2) and μ is a complex number, then |p_2 − μp_1^2| ≤ 2 max{1, |2μ − 1|}.
Coefficient bounds for the function class Q_m(τ, λ, γ, δ; α)
In this section, we assume that

For a function h ∈ P given by (1.2), if K(z) is any complex-valued function that satisfies the following conditions:

and

where z, w ∈ U and the function g = f⁻¹ is given by (1.7).
Finally, taking the absolute value of (2.24) and applying Lemma 1 once again to the coefficients p_m, p_{2m}, q_m, and q_{2m}, we deduce the asserted estimate. This completes the proof.
Coefficient bounds for the function class Σ_m(τ, λ, γ, δ; β)
In this section, we assume that

Definition 2 A function f ∈ Σ_m given by (1.5) is said to be in the class Σ_m(τ, λ, γ, δ; β) if it satisfies the following conditions:

and

where z, w ∈ U and the function g = f⁻¹ is given by (1.7).
Proof It follows from (3.1) and (3.2) that and where p(z) and q(w) have the forms (2.9) and (2.10), respectively. Clearly, we have and Equating the corresponding coefficients of (3.5) and (3.6) yields (1 + 2m(λ + γ) + λγ((2m + 1)^2 + 1)) and In view of (3.9) and (3.11), we find that and Adding (3.10) to (3.12), we obtain Hence, we find from (3.14) and (3.15) that and respectively. By taking the absolute values of (3.16) and (3.17) and applying Lemma 1 to the coefficients p_m, p_{2m}, q_m, and q_{2m}, we deduce the corresponding bounds, respectively. To determine the bound on |a_{2m+1}|, we subtract (3.12) from (3.10) to get Upon substituting the value of a_{m+1}^2 from (3.16) and (3.17) into (3.18), we conclude that and Now, taking the absolute values of (3.19) and (3.20) and applying Lemma 1 once again to the coefficients p_m, p_{2m}, q_m, and q_{2m}, we deduce the asserted estimates, respectively. This completes the proof.
Corollaries and consequences
This section is devoted to demonstrating some special cases of the definitions and theorems. These results are given in the form of remarks and corollaries.
For 1-fold symmetric bi-univalent functions, Theorem 1 reduces to the following corollary.
By taking δ = 0 in Corollary 6, we have the following result.
"Mathematics"
] |
Convergence Theorems for the Variational Inequality Problems and Split Feasibility Problems in Hilbert Spaces
In this paper, we establish an iterative algorithm by combining Yamada's hybrid steepest descent method and Wang's algorithm for finding the common solutions of variational inequality problems and split feasibility problems. The strong convergence of the sequence generated by our suggested iterative algorithm to such a common solution is proved in the setting of Hilbert spaces under suitable assumptions imposed on the parameters. Moreover, we propose iterative algorithms for finding the common solutions of variational inequality problems and multiple-sets split feasibility problems. Finally, we also give numerical examples illustrating our algorithms.
Introduction
In 2005, Censor et al. [1] introduced the multiple-sets split feasibility problem (MSSFP), which is formulated as follows: find x* ∈ ⋂_{i=1}^N C_i such that Ax* ∈ ⋂_{j=1}^M Q_j, (1) where C_i (i = 1, 2, ..., N) and Q_j (j = 1, 2, ..., M) are nonempty closed convex subsets of Hilbert spaces H_1 and H_2, respectively, and A: H_1 ⟶ H_2 is a bounded linear mapping. Denote by Ω the set of solutions of MSSFP (1). Many iterative algorithms have been developed to solve the MSSFP (see [1-3]). Moreover, it arises in many fields in the real world, such as the inverse problem of intensity-modulated radiation therapy, image reconstruction, and signal processing (see [1,4,5] and the references therein). When N = M = 1, the MSSFP is known as the split feasibility problem (SFP); it was first introduced by Censor and Elfving [5] and is formulated as follows: find x* ∈ C such that Ax* ∈ Q. (2) Denote by Γ the set of solutions of SFP (2). Assume that the SFP is consistent (i.e., (2) has a solution). It is well known that x ∈ C solves (2) if and only if it solves the fixed point equation x = P_C(I − cA*(I − P_Q)A)x, (3) where c is a positive constant, A* is the adjoint operator of A, and P_C and P_Q are the metric projections of H_1 and H_2 onto C and Q, respectively (for more details, see [6]). The variational inequality problem (VIP) was introduced by Stampacchia [7]; it consists in finding a point x* ∈ C such that 〈Fx*, x − x*〉 ≥ 0 for all x ∈ C, (4) where C is a nonempty closed convex subset of a Hilbert space H and F: C ⟶ H is a mapping. The ideas of the VIP are applied in many fields, including mechanics, nonlinear programming, game theory, and economic equilibrium (see [8-12]).
In [13], we see that x ∈ C solves (4) if and only if it solves the fixed point equation x = P_C(I − λF)x for any λ > 0. (5) Moreover, it is well known that if F is k-Lipschitz continuous and η-strongly monotone, then VIP (4) has a unique solution (see, e.g., [14]).
Since the SFP and the VIP include some special cases (see [15,16]) — indeed, the convex linear inverse problem and the split equality problem are special cases of the SFP, and the zero point problem and the minimization problem are special cases of the VIP — Jung [17] studied the common solution of the variational inequality problem and the split feasibility problem: find a point x* ∈ Γ such that 〈Fx*, x − x*〉 ≥ 0 for all x ∈ Γ, (6) where Γ is the solution set of SFP (2) and F: H_1 ⟶ H_1 is an η-strongly monotone and k-Lipschitz continuous mapping. After that, for solving problem (6), Buong [2] considered the following algorithms, proposed in [14,18], respectively:

x_{n+1} = (I − t_n μF)Tx_n, n ≥ 0, (7)

x_{n+1} = α_n x_n + (1 − α_n)(I − t_n μF)Tx_n, n ≥ 0, (8)

where T = P_C(I − cA*(I − P_Q)A), under the following conditions: (C1) t_n ∈ (0, 1), t_n ⟶ 0 as n ⟶ ∞, and ∑_{n=1}^∞ t_n = ∞; (C2) 0 < liminf_{n⟶∞} α_n ≤ limsup_{n⟶∞} α_n < 1.
Moreover, Buong [2] considered the sequence {x_n} generated by the following algorithm, which converges weakly to a solution of MSSFP (1): where P_1 = P_{C_1} ⋯ P_{C_N} and P_2 = P_{Q_1} ⋯ P_{Q_M}, or P_1 = ∑_{i=1}^N α_i P_{C_i} and P_2 = ∑_{j=1}^M β_j P_{Q_j}, in which α_i and β_j, for 1 ≤ i ≤ N and 1 ≤ j ≤ M, are positive real numbers such that ∑_{i=1}^N α_i = ∑_{j=1}^M β_j = 1. Motivated by the aforementioned works, we establish an iterative algorithm combining algorithms (7) and (8) for finding the solution of problem (6) and prove the strong convergence of the sequence generated by our iterative algorithm to the solution of problem (6) in the setting of Hilbert spaces. Moreover, we propose iterative algorithms for solving the common solutions of variational inequality problems and multiple-sets split feasibility problems. Finally, we also give numerical examples illustrating our algorithms.
Preliminaries
In order to establish our results, we now recall the following definitions and preliminary results that will be used in the sequel. Throughout this section, let C be a nonempty closed convex subset of a real Hilbert space H with inner product 〈·, ·〉 and norm ‖·‖.
From [5], we know that the metric projection P_C: H ⟶ C is firmly nonexpansive and (1/2)-averaged.
We collect some basic properties of averaged mappings in the following results.
Lemma 1 (see [16]). We have (i) The composite of finitely many averaged mappings is averaged.
(ii) If the mappings {T_i}_{i=1}^N are averaged and have a common fixed point, then ⋂_{i=1}^N Fix(T_i) = Fix(T_1 ⋯ T_N).

Proposition 1 (see [19]). Let D be a nonempty subset of H, let m ≥ 2 be an integer, and let φ: (0, 1)^m ⟶ (0, 1) be defined by

The following properties of nonexpansive mappings are very convenient and helpful to use.
Lemma 2 (see [20]).

Proposition 2 (see [19]). Let C be a nonempty subset of H, and let {T_i}_{i∈I} be a finite family of nonexpansive mappings from C to H. Assume that {α_i}_{i∈I} ⊂ (0, 1) and {δ_i}_{i∈I} ⊂ (0, 1] are such that ∑_{i∈I} δ_i = 1. Suppose that, for every i ∈ I, T_i is α_i-averaged; then T = ∑_{i∈I} δ_i T_i is α-averaged, where α = ∑_{i∈I} δ_i α_i. The following results play a crucial role in the next section.
Then, the sequence {x_n} defined by the following algorithm converges strongly to the unique solution x* of the variational inequality (4):

under the following conditions:

Theorem 2 (see [22]).
be as in Theorem 1. Then, the sequence {x_n} defined by the following algorithm:

converges strongly to the unique solution x* of the variational inequality (4).
Main Results
In this section, we consider the following iterative algorithm, obtained by combining Yamada's hybrid steepest descent method [14] and Wang's algorithm [18], for solving problem (6):

y_n = (1 − α_n)x_n + α_n(I − t_n μF)Tx_n,
x_{n+1} = (I − t_n μF)Ty_n, ∀n ≥ 1, (15)

where T = P_C(I − cA*(I − P_Q)A). If we set α_n = 0 for all n ∈ N, then (15) reduces to (7), studied by Buong [2]. On the other hand, in the Numerical Example section we present an example illustrating that the two-step method (15) is more efficient than the one-step method (8) studied by Buong [2]: the sequence generated by (15) requires fewer iterations and converges faster than the sequence generated by the one-step method (8). Throughout our results, unless otherwise stated, we assume that H_1 and H_2 are two real Hilbert spaces and A: H_1 ⟶ H_2 is a bounded linear mapping. Let F be an η-strongly monotone and k-Lipschitz continuous mapping on H_1 with positive constants η and k. Assume that μ ∈ (0, 2η/k^2) is a fixed number.

Theorem 3. Let C and Q be two closed convex subsets of H_1 and H_2, respectively. Then, as n ⟶ ∞, the sequence {x_n} defined by (15), where the sequences {t_n} and {α_n} satisfy conditions (C1) and (C2), respectively, converges strongly to the solution of (6).
Since (1 − λ)I + λS and I − t_n μF are nonexpansive, (I − t_n μF)T is also nonexpansive. Therefore, the strong convergence of (15) to the element x* in the solution set of (6) follows from Theorem 2.
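To make the two-step scheme concrete, here is a minimal runnable Python sketch of iteration (15) on a toy instance in R^2; all problem data — the balls C and Q, the matrix A, and F(x) = x − b — are illustrative assumptions of ours, not taken from the paper.

```python
import numpy as np

def proj_ball(x, center, radius):
    """Metric projection onto a Euclidean ball (illustrative choice of C and Q)."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

# Illustrative problem data (not from the paper).
A = np.array([[1.0, 0.3], [0.0, 1.0]])       # bounded linear mapping
cC, rC = np.zeros(2), 2.0                    # C = ball(cC, rC) in H1
cQ, rQ = np.array([1.0, 0.0]), 1.0           # Q = ball(cQ, rQ) in H2
b = np.array([0.5, -0.2])
F = lambda x: x - b                          # eta = k = 1 (strongly monotone, Lipschitz)

gamma = 0.9 / np.linalg.norm(A, 2) ** 2      # c in (0, 1/||A||^2)
mu = 1.0                                     # mu in (0, 2*eta/k^2)
T = lambda x: proj_ball(x - gamma * A.T @ (A @ x - proj_ball(A @ x, cQ, rQ)), cC, rC)

x = np.array([3.0, 3.0])
for n in range(1, 2001):
    t, alpha = 1.0 / (n + 1), 0.5            # (C1): t_n -> 0, sum t_n = inf; (C2)
    Tx = T(x)
    y = (1 - alpha) * x + alpha * (Tx - t * mu * F(Tx))   # first step of (15)
    Ty = T(y)
    x_next = Ty - t * mu * F(Ty)                          # second step of (15)
    if np.linalg.norm(x_next - x) < 1e-8:
        x = x_next
        break
    x = x_next

print("approximate common solution:", x)
```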
In [23], Miao and Li showed weak convergence of the sequence {x_n} to an element of Fix(T), where {x_n} is generated by the following algorithm:

y_n = (1 − β_n)x_n + β_n(I − t_n μF)Tx_n,
x_{n+1} = (1 − α_n)x_n + α_n(I − t_n μF)Ty_n, ∀n ≥ 1, (17)

where {t_n} satisfies condition (C3): ∑_{n=1}^∞ t_n < +∞. Next, we show strong convergence for (17) when {t_n} satisfies condition (C1). □

Theorem 4. Let C and Q be two closed convex subsets of H_1 and H_2, respectively. Then, as n ⟶ ∞, the sequence {x_n} defined by (17), where the sequence {t_n} satisfies condition (C1) and {β_n} and {α_n} satisfy condition (C2), converges strongly to the solution of (6).
Since (I − t_n μF)T is nonexpansive, the strong convergence of (17) to the element x* in the solution set of (6) follows from Theorem 1. Moreover, we obtain the following results, which solve the common solution of the variational inequality problem and the multiple-sets split feasibility problem, i.e., find a point x* ∈ Ω such that 〈Fx*, x − x*〉 ≥ 0 for all x ∈ Ω, (19) where Ω is the solution set of (1) and F: H_1 ⟶ H_1 is an η-strongly monotone and k-Lipschitz continuous mapping.
This problem has been studied in [2].

Theorem 5. Let C_i (i = 1, ..., N) and Q_j (j = 1, ..., M) be nonempty closed convex subsets of H_1 and H_2, respectively. Assume that c ∈ (0, 1/‖A‖^2), {t_n} and {α_n} satisfy conditions (C1) and (C2), respectively, and the parameters δ_n and ζ_n satisfy the following conditions:

Then, as n ⟶ ∞, the sequence {x_n}, defined by

y_n = (1 − α_n)x_n + α_n(I − t_n μF)P_1(I − cA*(I − P_2)A)x_n, (20)

with one of the following cases (A1)-(A4) for P_1 and P_2, converges to the element x* in the solution set of (19).
Proof. Let T = P_1(I − cA*(I − P_2)A). We will show that T is averaged.
If P_1 = ∑_{i=1}^N δ_i P_{C_i} and P_2 = ∑_{j=1}^M ζ_j P_{Q_j}, then by using Proposition 2 and condition (a), we obtain that P_1 is (1/2)-averaged. From condition (b), and taking into account that P_{Q_j} is nonexpansive for all j = 1, ..., M, we have that P_2 is also nonexpansive. It follows from Lemma 2 that I − cA*(I − P_2)A is c‖A‖^2-averaged. Thus, T is λ-averaged with λ = (1 + c‖A‖^2)/2.
Since (1 − λ)I + λS and I − t_n μF are nonexpansive, (I − t_n μF)T is nonexpansive. Thus, the strong convergence of (20) to the element x* in the solution set of (19) follows from Theorem 2.
Let C_i, Q_j, c, {t_n}, {δ_n}, and {ζ_n} be as in Theorem 5. Then, as n ⟶ ∞, the sequence {x_n}, defined by

with one of the cases (A1)-(A4), converges strongly to an element in the solution set of (19).
Since (I − t_n μF)T is nonexpansive, the strong convergence of (23) to the element x* in the solution set of (19) follows from Theorem 1.
Numerical Example
In this section, we present a numerical example comparing algorithm (8), given by Buong [2], and algorithm (15) (the new method) on the following test problem from [2]: find an element x* ∈ Ω such that

where φ is a convex function having a strongly monotone and Lipschitz continuous derivative φ′(x) on the Euclidean space, with a_{ik}, b_i ∈ (−∞, +∞) for 1 ≤ k ≤ n and 1 ≤ i ≤ N. For each algorithm, we set a_i = (1/i, −1) and b_i = 0 for all i = 1, ..., N, and a_j = (1/j, 0) and R_j = 1 for all j = 1, ..., M. Taking a = 0.5 and c = 0.3, the stopping criterion is defined by E_n = ‖x_{n+1} − x_n‖ < ε, where ε = 10^-4 and 10^-6. The numerical results are listed in Table 1 for different initial points x_1, where n is the number of iterations and s is the CPU time in seconds. In Figures 1 and 2, we present graphs illustrating the number of iterations for both methods using the stopping criterion defined above with the different initial points shown in Table 1. Let φ, a, and A be as in Example 1. In this numerical experiment, we take the stopping criterion E_n < 10^-4. The numerical results are listed in Table 2 for different cases of P_1 and P_2. In Figures 3 and 4, we present graphs illustrating the number of iterations for all cases of P_1 and P_2 using the stopping criterion above with the different initial points appearing in Table 2. Moreover, Table 3 shows the effect of different choices of c.
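For readers re-implementing the test problem, the constraint sets appear to be halfspaces C_i (data a_i, b_i) and balls Q_j (radius R_j); under that assumption — ours, since the defining displays did not survive extraction — the closed-form projections are the standard ones sketched below, which can be composed or convexly combined to build P_1 and P_2 as in cases (A1)-(A4).

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Projection onto {x : <a, x> <= b} (assumed form of C_i with data a_i, b_i)."""
    viol = np.dot(a, x) - b
    return x if viol <= 0 else x - viol * a / np.dot(a, a)

def proj_ball(x, center, R):
    """Projection onto a ball of radius R (assumed form of Q_j with radius R_j)."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= R else center + R * d / n
```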
Remark 2. We observe from the numerical analysis in Table 2 that algorithm (23) has the fastest convergence when P_1 and P_2 satisfy (A4) and the slowest convergence when P_1 and P_2 satisfy (A3). Moreover, fewer iteration steps and less CPU time are required for convergence when c is chosen very small and close to zero.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that there are no conflicts of interest.
"Mathematics"
] |
The Effect of the Trace Elements Concentrations on the Cancerous and Healthy Tissues in Radiotherapy
The study aimed to investigate the effect of trace element concentrations in healthy and cancerous prostate tissues on dose distributions in radiotherapy. In this work, trace element compounds completely soluble in water were used, and their concentrations given in the literature were mixed homogeneously with pure water. To the best of our knowledge, this is the first such study in the literature. The percent depth dose (PDD) measurements were performed using an Elekta Synergy Platform linac for 6 and 18 MV photon energies. We also obtained PDD results by choosing higher trace element concentrations in cancerous prostate tissue than those given in the literature, to see the effect on radiotherapy. The experimental measurements were compared with the results obtained from the GATE simulation code. The TPR20/10 was calculated for a 10 × 10 cm² field size for 6 and 18 MV photons and compared with simulation results. The differences between simulation and measurement for 6 MV and 18 MV photons are 1.75% and 1.82%, respectively. The experimental and simulated results presented an uncertainty lower than 3%. Simulated dose values are in good agreement with the experimental results, with differences of less than 2%. We see that the trace element concentrations of healthy and cancerous tissues did not affect the dose distribution at high photon energies; this is an expected and well-known result. We believe that this in vitro study is important for once again proving the reliability of the dose given in radiotherapy treatment.
Introduction
Cancer is considered one of the deadliest diseases and a leading cause of death worldwide [1]. Cancer treatment is performed by various methods, such as surgery, chemotherapy, brachytherapy, and radiotherapy, or by combinations of these methods. Approximately 50% of all cancer patients undergo external radiotherapy (ER) using photons [2] [3] [4]. Linear accelerators used in external beam radiation therapy enable patients to be irradiated at different dose rates. The main aim of radiotherapy is to give the highest dose to the tumor and the lowest dose to the healthy tissue around the tumor. It is important to confirm the accuracy of the dose given to the patient during treatment planning by using dose distribution algorithms. Particle transport in the presence of an internal or external source, and the energy deposited in tissue, can be determined by Monte Carlo (MC) calculations.
In many studies, MC simulation packages have been used for radiation dosimetry calculations, such as OMEGA [5], MCNP4C [6] [7], and EGS [8] [9]. In this research, the experimental results were simulated with the open-source code called Geant4 Application for Emission Tomography (GATE), an MC simulation based on the GEANT4 toolkit (http://www.opengatecollaboration.org/). Geant4 is a software toolkit for the simulation of the interaction of particles with matter, and its application areas include high-energy physics experiments, astrophysics and astroparticle physics, nuclear physics, space science, medical physics and medical imaging, and radiation protection [10] [11]. GATE plays a key role in the simulation of radiotherapy experiments, PET and SPECT studies, and the design of new medical imaging devices [12] [13]. Recent studies have emphasized the importance of trace elements in investigations of the possible causes of cancer [14] [15] [16] [17]. Trace elements have a significant effect as components of many enzymes in all biological systems [18]. Although trace elements constitute a minor part of living tissues, they are important for vital processes. Trace element levels have been found to be deficient or in excess relative to normal values in some diseases, including cancer [19] [20]. Trace elements have different concentrations in healthy and cancerous tissues due to biological changes induced by the disease. The concentrations of trace elements in healthy and cancerous tissues or fluids have recently been obtained using many experimental techniques, such as particle-induced X-ray emission (PIXE), X-ray fluorescence analysis (XRF) and its total reflection geometry method (TRXRF), atomic absorption spectrometry (AAS), and neutron activation analysis (NAA) [21] [22] [23] [24] [25].
The concentration of trace elements has a noticeable effect on dose distributions in brachytherapy treatment (an internal radiation modality), which uses low-energy photon sources [26] [27]. In brachytherapy, the dosimetric impact of trace elements has been studied for normal and cancerous tissues using low-energy photon sources with Monte Carlo (MC) calculations [28]. The results showed that, in the presence of trace elements, dose distributions varied depending on the atomic number and fraction of the elements in the tissue.
The effect of the tissue composition on dose distribution was also investigated using electron beams in radiotherapy [29]. Ghorbani et al. [29] showed that differences in dose distribution were not significant among various soft tissues and tissue-equivalent materials. However, due to the differences in the composition of the materials, it has been proposed to investigate the uncertainties in the calculations.
This study aimed to determine the effect of the trace element concentrations on dose distribution in healthy and cancerous prostatic tissues in radiotherapy.
The trace element concentrations given in the literature were mixed homogeneously inside the water phantom. Here, we used trace element compounds completely soluble in water. The experiment was performed with 6 MV photon beams of the Elekta Synergy Platform linear accelerator. The percent depth-dose distributions (PDD) for the concentration of each element were measured, and the results were compared with simulations using the GATE/GEANT4 code (see Table 1). Measurements were carried out with an electrometer and CC13 and FC65P ion chambers (IBA Dosimetry, Nuremberg, Germany). The ion chambers were calibrated by the Turkey Atomic Energy Agency Secondary Standard Dosimetry Laboratory. Irradiations were performed with 6 and 18 MV photons; the source-to-surface distance was 100 cm, and the irradiation field sizes were 10 × 10 cm² and 20 × 20 cm². The FC65P ion chamber was placed in a solid water phantom (RW3, IBA; Schwarzenbruck, Germany). Then, the water tank was placed on top of the solid water phantoms. Pure water and all chemical elements used in the measurements were obtained from the Ankara University Chemistry Department Central Warehouse.
Percentage Depth Dose Measurements
Firstly, the tissue-phantom ratio at depths of 20 and 10 cm (TPR 20/10) was obtained for a 10 × 10 cm² field size to validate the simulation for 6 and 18 MV photon energies [30]. The beam quality is specified by TPR 20/10 [23] [31] [32] [33] [34]. The PDD measurements were first performed with pure water, which is a tissue-equivalent material. The concentrations for each trace element were calculated according to the water phantom volume.
The trace element concentrations of healthy and cancerous prostate tissues given in the literature were used to prepare the water mixtures [35]. Then, the PDD measurements were carried out for each trace element compound.
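As an illustration of the concentration arithmetic described above (a minimal sketch with made-up numbers; the function name and values are ours, not the paper's): for a target element mass fraction w in the phantom water, the mass of a soluble compound to dissolve is m = w · m_water / f, where f is the element's mass fraction in the compound.

```python
def compound_mass_g(target_fraction, water_volume_l, element_fraction_in_compound,
                    water_density_g_per_l=1000.0):
    """Mass of a soluble compound to dissolve so that the element reaches a target
    mass fraction in the water phantom (dilute-solution approximation)."""
    water_mass = water_volume_l * water_density_g_per_l
    return target_fraction * water_mass / element_fraction_in_compound

# Illustration with hypothetical values (not from the paper): 2 ppm of Zn in a
# 30 L tank using ZnCl2, in which Zn is ~48% by mass.
print(f"{compound_mass_g(2e-6, 30.0, 0.48):.3f} g of ZnCl2")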
GATE Simulations
For dosimetry-related applications in radiation therapy, GATE v8 was used; it can score the deposited dose and its statistical uncertainty in any volume [10]. We selected the Standard physics list with option 3 for photons, e−, and e+ [36].
Results and Discussion
First, the flatness and symmetry of the instrument were determined by using the IBA Blue Phantom 2 water phantom. For the linac, the flatness value should be less than 3% and the symmetry value less than 2%; these values are obtained in the largest field size at 100 cm SSD and 10 cm depth [37]. TPR 20/10 was calculated for a 10 × 10 cm² field size for 6 and 18 MV photons and compared with simulation results. The differences between simulation and measurement for 6 MV and 18 MV photons are 1.75% and 1.82%, respectively, as shown by the values in Table 3 and Table 4.
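A small helper of the kind used for this check (our sketch, with hypothetical PDD values): the TRS-398 empirical conversion from the PDD ratio is our addition, since the paper does not state how TPR 20/10 was evaluated.

```python
def tpr20_10_from_pdd(pdd20, pdd10):
    """Empirical IAEA TRS-398 conversion from the percent depth doses at 20 cm and
    10 cm depth (100 cm SSD, 10x10 cm^2 field) to the beam-quality index TPR20/10."""
    return 1.2661 * (pdd20 / pdd10) - 0.0595

def percent_diff(measured, simulated):
    """Local percent difference between measurement and simulation."""
    return 100.0 * abs(measured - simulated) / measured

# Hypothetical PDD values for illustration only (not the paper's data):
tpr_meas = tpr20_10_from_pdd(pdd20=38.9, pdd10=66.6)   # ~0.68, typical for 6 MV
tpr_sim = tpr20_10_from_pdd(pdd20=39.6, pdd10=66.9)
print(f"TPR20/10 meas={tpr_meas:.3f}, sim={tpr_sim:.3f}, "
      f"diff={percent_diff(tpr_meas, tpr_sim):.2f}%")
```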
Teixeira et al. [38] created the phase space of the Novalis Classic linear accelerator at 6 MV in the GATE simulation program and compared PDDs and dose profiles with experimental data for 10 × 10 cm² and 3 × 3 cm² radiation fields. They found that the TPR 20/10 difference between simulation and measurement was about 1.5% for 6 MV. In our study, the difference between simulation and experiment was calculated as 1.75%. In Figure 3 and Figure 4, the experimental measurements performed with pure water are compared with the simulated results for 6 and 18 MV photons at 10 × 10 cm² and 20 × 20 cm² field sizes. The PDD distributions for the concentrations of trace elements in healthy and cancerous prostate tissues were measured using the trace element mass fractions indicated in Table 2. The same conditions were defined and then simulated in GATE. ICRU 24 [39] recommends that the uncertainty in the dose given in radiotherapy should not exceed ±5%. In this study, the difference between measured and simulated results for the pure water measurement did not exceed ±2%.
Grevillot et al. [40] measured PDDs and dose profiles for 6 MV photons using an Elekta Precise linac and simulated them with the GATE program; they likewise found the dose differences between simulation and measurements to be small. Ghorbani et al. [29] studied the effect of the tissue composition on dose distribution for 8/12/14 MeV electron beams in radiotherapy. Various soft tissues and tissue-equivalent materials were simulated using the MCNPX MC code for a Siemens Primus linear accelerator, and no significant differences in dose distributions were found among the various soft tissues and tissue-equivalent materials (see also Ghorbani et al. [45]). White et al. [28] determined the dose distribution for the trace element concentrations in healthy or cancerous human tissues with low-energy photon sources in brachytherapy. They simulated the dose distribution with Geant4 v9.3 and found that the different trace element concentrations between healthy and cancerous prostate tissues affected the dose distribution and should not be ignored.
In Figure 14, the experimental data obtained with pure water and with the mixture of nine elements (CTCx4) for 6 MV photons, at 10 × 10 cm² and 20 × 20 cm² field sizes, are compared with the simulation.
Conclusion
To investigate the effect of trace element concentrations in tissue on dose distribution, experimental and simulated PDD values for pure water and for healthy and cancerous prostate tissues were obtained at 6 MV photon energy. The experimental PDD values were in good agreement with the data simulated using the GATE code, with differences of less than 2% between the measured and simulated results. The experimental and simulated results presented an uncertainty lower than 3%. As expected, the differences between the trace element concentrations of healthy and cancerous tissues did not affect the dose distribution at high photon energies; this is an expected and well-known result. We believe that this in vitro study is important for once again proving the reliability of the dose given in radiotherapy treatment.
Conflicts of Interest
The authors declare no conflicts of interest regarding the publication of this paper.
"Medicine",
"Physics"
] |
Paclitaxel-Loaded Nanosponges Inhibit Growth and Angiogenesis in Melanoma Cell Models
This study investigated the effects of free paclitaxel (PTX) and PTX loaded in pyromellitic nanosponges (PTX-PNS) in reducing in vitro and in vivo melanoma cell growth and invasiveness, and in inhibiting angiogenesis. To test the response of cells to the two PTX formulations, cell viability was evaluated by MTT assay in seven continuous cell lines, in primary melanoma cells, both in 2D and 3D cultures, and in human umbilical vein endothelial cells (HUVECs) after exposure to different concentrations of PTX or PTX-PNS. Cell motility was assessed by a scratch assay or Boyden chamber assay, evaluating cell migration in the presence or absence of diverse concentrations of PTX or PTX-PNS. The effect of PTX and PTX-PNS on angiogenesis was evaluated by the endothelial tube formation assay, a test able to estimate the formation of three-dimensional vessels in vitro. To assess the anticancer effect of PTX and PTX-PNS in in vivo experiments, the two drug formulations were tested in a melanoma mouse model obtained by B16-BL6 cell implantation in C57/BL6 mice. The results obtained were as follows: 1) MTT analysis revealed that cell proliferation was more affected by PTX-PNS than by PTX in all tested cell lines, in both 2D and 3D cultures; 2) analysis of cell migration showed that PTX-PNS acted at much lower concentrations than PTX; 3) the tube formation assay showed that PTX-PNS was more effective in inhibiting tube formation than free PTX; and 4) in vivo experiments demonstrated that tumor weights, volumes, and growth were significantly reduced by PTX-PNS treatment with respect to PTX; angiogenesis and cell proliferation, detected in the tumor samples with CD31 and Ki-67 antibodies, respectively, indicated that, in PTX-PNS-treated tumors, tube formation was inhibited and only a low number of proliferating cells was present. Taken together, our data demonstrate that our new PTX nanoformulation can address some important issues related to PTX treatment, lowering the effective anti-tumor doses and increasing the effectiveness in inhibiting melanoma growth in vivo.
INTRODUCTION
Melanomas are a heterogeneous group of aggressive and highly metastatic tumors (Radovic et al., 2012), representing the deadliest form of skin cancer. Nearly half of patients with metastatic melanoma harbor a mutation of the valine at codon 600 of the serine/threonine kinase BRAF (BRAFV600 mutation) (Davies et al., 2002). BRAF inhibitors (BRAFi) selectively target the BRAF V600E/K genetic alterations and are widely used to treat melanoma patients harboring the BRAFV600 mutation. Treatment with BRAFi results in high response rates; however, responses are short-lived, with a median time to progression of 5.1-8.8 months (Flaherty et al., 2010; Robert et al., 2015). The addition of a MEK inhibitor to a BRAFi extends the median duration of response from 5.6 to 9.5 months.
Similar results have been observed in patients treated with anti-PD-1 (programmed cell death 1) monotherapy or a combination of anti-PD-1 and anti-CTLA-4 agents. A recent overall survival (OS) analysis of the phase 3 KEYNOTE-006 trial showed a 33-month OS rate of 50% in patients receiving pembrolizumab monotherapy, an anti-PD-1 monoclonal antibody (Robert et al., 2017). Despite the efficacy of BRAF-targeted and PD-1-related immune therapies in treating metastatic melanoma, a significant number of patients exhibit resistance. Although chemotherapeutic drugs, including dacarbazine, cisplatin, and paclitaxel (PTX), have been used, alone or in combination, without significant improvement in survival rate (Bhatia et al., 2009), some patients with metastatic melanoma present remarkable responses to chemotherapeutic agents, even in the absence of a response to modern targeted therapies and immunotherapies (Simon et al., 2017).
PTX was originally isolated from the bark of the Pacific yew tree, Taxus brevifolia, and phase II clinical trials suggested that it had clinical activity in melanoma (Walker et al., 2005). In addition to the microtubule-stabilizer function and the induction of cytotoxicity, PTX has been found to induce immunogenic cell death, which results in augmented CD8+ T cell priming and cytotoxic activity (Song et al., 2017), regulating the immunosuppressive microenvironment in tumor (Pfannenstiel et al., 2010). However, free PTX showed nonselective distribution and poor water solubility (less than 0.3 μg/ml) (Konno et al., 2003). The common PTX formulation approved consists of Cremophor EL ® and ethanol solution, but these solvents present severe side effects. To overcome these limitations, PTX has been embedded in drug nanotherapeutics, including albumin and polymeric micelle nanoparticles to treat several types of tumors (Sofias et al., 2017). These nanoformulations can reduce serious adverse effects of PTX, like allergic reactions, nephrotoxicity, and neurotoxicity, but some of them showed insufficient solubilizing capacity and poor stability (Mittal et al., 2018). For instance, Abraxane ® , human serum albumin-bound PTX nanoparticle, approved by FDA in 2005, reduced off-targeted side effects and improved antitumor efficacy, but after i.v. administration, Abraxane ® rapidly dissociates into the individual constituents. Moreover, the pharmacokinetics and the biodistribution of PTX are not improved (Chen et al., 2018). Thus, identification of non-toxic formulations capable to deliver PTX to the target site and release it in a sustained manner is needed to avoid the nonspecific biodistribution and to prevent the toxicity due to excessive dose of the drug. The novel pyromellitic nanosponges (PNS) showed the capability to act as PTX nanocarrier able to store and release it slowly and in a prolonged manner.
In this paper, we compare the effectiveness of free PTX and PTX-loaded PNS (PTX-PNS) in inhibiting in vitro and in vivo melanoma cell growth and invasiveness and in inhibiting angiogenesis.
Preparation of PTX-Loaded Pyromellitic Nanosponges
PNS were synthesized by reacting β-cyclodextrin with pyromellitic anhydride as a crosslinking agent at a 1:4 molar ratio (CD/cross-linker). To transform the coarse PNS powder into a nanoformulation suitable for intravenous administration, a top-down method was applied. PNS were suspended in saline solution (NaCl 0.9% w/v) at a concentration of 10 mg/ml and homogenized using a high-shear Ultra-Turrax homogenizer (10 min, 24,000 rpm). Then, a high-pressure homogenization (HPH) step was performed to reduce the PNS size, using a high-pressure homogenizer (EmulsiFlex C5, Avestin; 90 min, 500 bar). The PNS nanosuspension was then purified by dialysis (membrane cutoff 12,000 Da). PTX-PNS were obtained by adding PTX, solubilized in 50 µl of isopropanol, to the blank PNS nanosuspension. The mixture was stirred at room temperature for 24 h.
Physico-Chemical Characterization of PTX-Loaded Pyromellitic Nanosponges
Size, polydispersity index, and zeta potential values of blank and PTX-PNS were measured by dynamic light scattering using a 90 Plus particle sizer (Brookhaven Instruments Corporation, USA). The measurements were performed using diluted PNS samples at a fixed angle of 90° and at a temperature of 25 °C. For zeta potential determination, the samples were placed in the electrophoretic cell, where an electric field of about 15 V/cm was applied.
The PNS morphology was evaluated by transmission electron microscopy (TEM) analysis, using a Philips CM 10 transmission electron microscope. PNS samples were sprayed on a Formvar-coated copper grid and air-dried before observation.
Paclitaxel High Performance Liquid Chromatography (HPLC) Quantitative Determination
PTX quantitative determination was carried out by HPLC analysis using a pump (Perkin Elmer Pump 250B, Waltham, MA) equipped with a spectrophotometric detector (Flexar UV/Vis LC detector, Perkin Elmer, Waltham, MA). A reverse-phase Agilent TC C18 column (150 mm × 4.6 mm, pore size 5 μm; Agilent Technologies, Santa Clara, CA, USA) was used. The column was eluted with acetonitrile/water (60:40) at a flow rate of 1 ml/min. PTX was detected at 227 nm with the UV/Vis detector. The drug concentration was calculated using the external standard method from a standard calibration curve.
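The external standard method amounts to a linear calibration of peak area against known standards; a minimal sketch follows (the calibration data are illustrative, not the paper's):

```python
import numpy as np

# Hypothetical calibration standards: PTX concentration (ug/ml) vs peak area (a.u.)
conc = np.array([1.0, 5.0, 10.0, 25.0, 50.0])
area = np.array([102.0, 498.0, 1010.0, 2490.0, 5015.0])

slope, intercept = np.polyfit(conc, area, 1)   # least-squares calibration line

def ptx_concentration(sample_area):
    """Invert the calibration line to quantify an unknown sample."""
    return (sample_area - intercept) / slope

print(f"{ptx_concentration(1800.0):.2f} ug/ml")
```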
In vitro Release Studies
The release kinetics of PTX from PTX-PNS was evaluated in vitro. In vitro drug release studies were conducted in a multi-compartment rotating cell, comprising a donor chamber separated from the receiving phase by a cellulose membrane (Spectrapore, cut-off = 12,000 Da); 1 ml of PTX-PNS was placed in the donor chamber. The receiving chamber contained 1 ml of phosphate buffer 0.05 M (pH 7.4 or pH 5.5) with 10% ethanol added to ensure drug solubility. The receiving phase was withdrawn at regular intervals and completely replaced with the same amount of fresh buffer to maintain sink conditions. The concentration of PTX in the withdrawn samples was determined by HPLC.
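Since each withdrawn sample is replaced with fresh buffer, the cumulative amount released must account for the drug removed at earlier time points; with complete replacement of the 1-ml receiving phase, the correction reduces to summing the sampled concentrations (our sketch, with hypothetical numbers):

```python
def cumulative_release_percent(concs_ug_per_ml, receiving_volume_ml, dose_ug):
    """Cumulative % released when the whole receiving phase is withdrawn and
    replaced at each time point: mass released = V * sum of sampled concentrations."""
    released = [receiving_volume_ml * sum(concs_ug_per_ml[:i + 1])
                for i in range(len(concs_ug_per_ml))]
    return [100.0 * m / dose_ug for m in released]

# Hypothetical sampled concentrations (ug/ml) at successive time points:
print(cumulative_release_percent([2.0, 3.5, 4.0, 4.5],
                                 receiving_volume_ml=1.0, dose_ug=800.0))
```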
Cell Cultures and Treatments
The following human melanoma cell lines were used: A375 from the American Type Culture Collection (ATCC; Manassas, VA), and M14, JR8, RPMI7932, PCF-2, and LM from Dr. Pistoia (Gaslini Institute, Genoa, Italy). The mouse melanoma B16-BL6 cell line was obtained from RIKEN, Saitama, Japan (RIKEN is Japan's largest comprehensive research institution, renowned for high-quality research in a diverse range of scientific disciplines). These cells were cultured in RPMI1640 medium, except A375, which were cultured in DMEM. Both media were supplemented with 10% fetal bovine serum (FBS), 100 units/ml penicillin, and 100 μg/ml streptomycin in a 5% CO2, 37 °C incubator. Human umbilical vein endothelial cells (HUVECs) were isolated from human umbilical veins by trypsin treatment (1%) and cultured in M199 medium with the addition of 20% FCS, 100 U/ml penicillin, 100 μg/ml streptomycin, 5 IU/ml heparin, 12 μg/ml bovine brain extract, and 200 mM glutamine. HUVECs were grown to confluence in flasks and used at the 2nd-5th passages. Use of HUVECs was approved by the Ethics Committee of the "Presidio Ospedaliero Martini" of Turin and conducted in accordance with the Declaration of Helsinki. Written informed consent was obtained from patients.
Isolation and Characterization of Primary Melanoma Cells
The primary melanoma cell line (PMel) was isolated from a 77-year-old Caucasian male patient with a superficial spreading melanoma in the vertical growth phase, showing infiltration of the papillary dermis and cutaneous ulceration without metastasis. The study was approved by the Committee for Human Biospecimen Utilization (ChBU, Department of Medical Sciences, University of Turin). Written informed consent was obtained from the patient for the tissue to be used in research. The tissue sample used for establishing the primary cell culture was collected from the "left-over tissue" (residual tissue not used for diagnostic and therapeutic purposes) at the Department of Medical Sciences, Pathology Unit, University of Torino (Italy), in sterile tubes containing 10 ml of RPMI serum-free medium supplemented with 1% penicillin-streptomycin-fungizone. Primary cell culture isolation was performed as described by Annaratone et al. (2013) with minor modifications. Briefly, the tissue sample was washed three times with the same medium, then finely minced with surgical blades into approximately 1 × 1 mm fragments, which were incubated at 37 °C with collagenase type IV (1 mg/ml; 1:1 RPMI, final volume 10 ml) for 3-5 h until complete disaggregation of the fragments was obtained. Digested tissue samples were shaken vigorously by hand to disaggregate possible residual large clumps. Collagenase activity was blocked by the addition of 10 ml of RPMI with 10% FBS. After centrifugation at 800 rcf for 6 min, the cell pellets were re-suspended in complete culture medium. The final cell suspension was seeded in petri dishes as passage 0 and kept in a humidified incubator with 5% CO2 at 37 °C in DMEM-F12 medium supplemented with 10% FBS, 10 ng/ml human epidermal growth factor (EGF), 5 mg/ml insulin, 400 ng/ml hydrocortisone, 1% L-glutamine, and 1% penicillin-streptomycin-fungizone (Sigma-Aldrich). The culture medium was changed first at the time of cell attachment and, subsequently, three times a week. After three passages, the cells were characterized for two specific melanoma markers, S100 and HMB45, by immunocytochemistry (ICC). ICC was performed using an automated slide processing platform (Ventana BenchMark XT Autostainer, Ventana Medical Systems, Tucson, AZ, USA). The anti-S100 and anti-HMB45 antibodies were purchased from DAKO and used following the manufacturer's instructions (Milan, Italy). Both markers were positive (data not shown), confirming the isolation of melanoma-type cells. PMel cells were maintained in standard 2D cell culture in DMEM-F12 medium supplemented as described above.
Spheroid Formation of Primary Melanoma (PMeI) Cells
PMel cells, cultured under standard 2D conditions, were dissociated with trypsin-EDTA into single-cell suspensions. The cells were then seeded on ultra-low-attachment (ULA) 96-well flat-bottom plates (Sigma). Optimal seeding densities were established such that the melanoma spheroids for both primary cell lines tested fell within a size range of 200 to 500 µm in diameter on day 8, considered appropriate for initiating experimental studies. Representative images of PMel tumor spheroids obtained on day 8, starting from 1 × 10^4 cells/well, are shown in Supplementary Data, Figure S1.
MTT Assay
The toxic effect of PTX or PTX-PNS was determined by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay, as previously described (Ciamporcero et al., 2018). This colorimetric assay may be interpreted as a measure of both cell viability and cell proliferation (Sylvester, 2011). Cells were seeded (0.8-1.5 × 10^3 cells/well) in 100 μl of serum-supplemented medium and treated with different concentrations of PTX and PTX-PNS. Untreated cells or cells treated with empty PNS were used as controls. After 72 h, the drug was removed and the MTT assay was performed. The optical density (OD) of treated and untreated cells was determined at a wavelength of 570 nm with a microplate reader after 4 h of incubation. Controls were normalized to 100%, and the readings from treated cells were expressed as % viability inhibition. Eight replicates were used to determine each data point, and five independent experiments were performed.
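The normalization described above amounts to the following one-liner (hypothetical OD values for illustration):

```python
def percent_inhibition(od_treated, od_control):
    """Viability inhibition relative to untreated control (control = 100% viability)."""
    return 100.0 * (1.0 - od_treated / od_control)

print(f"{percent_inhibition(od_treated=0.42, od_control=1.05):.1f}% inhibition")
```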
WST-1 Assay
The cytotoxic effect of PTX or PTX-PNS on PMel spheroids was determined by using the 2-(4-iodophenyl)-3-(4-nitrophenyl)-5-(2,4-disulfophenyl)-2H-tetrazolium (WST-1) reagent (Roche, Italy). Cells were seeded (0.8-1.5 × 10^3 cells/well) in 100 μl of serum-supplemented medium and treated with different concentrations of PTX, PTX-PNS, or PNS. After 72 h, the drug was removed and the WST-1 assay was performed. The OD of treated and untreated cells was determined at a wavelength of 450 nm with a microplate reader after 4 h of incubation. Controls were normalized to 100%, and the readings from treated cells were expressed as % viability inhibition. Eight replicates were used to determine each data point, and five independent experiments were performed.
Crystal Violet Assay
For the quantitative determination of cells adhering to the plate after the 6-h treatment with different concentrations of PTX, PTX-PNS, or PNS, the crystal violet assay was used. Crystal violet is a water-soluble dye with affinity for DNA at neutral pH and is soluble at acidic pH. After treatment, the cells were washed, fixed, and stained with crystal violet-methanol. After careful washing, acetic acid was added, and the reading was made with a spectrophotometer at 595 nm. Controls were normalized to 100%, and the readings from treated cells were expressed as % viability inhibition. Four replicates were used to determine each data point, and five independent experiments were performed.
Cell Motility Assays
In the wound-healing assay, after starvation for 24 h in serum-free medium, HUVECs were plated onto six-well plates (10^6 cells/well) and grown to confluence. Cell monolayers were wounded by scratching with a pipette tip along the diameter of the well and were washed twice with serum-free medium before incubation with diverse concentrations of PTX, PTX-PNS, or PNS. Drug concentrations that were not cytotoxic were used for this assay. In order to monitor cell movement into the wounded area, five fields of each wound were photographed immediately after the scratch (T0) and after 24 h. The endpoint of the assay was measured by calculating the reduction in the width of the wound after 24 h compared to T0, which is set at 100%. The area of wound healing was calculated by using the ImageJ software (Schneider et al., 2012).
In the Boyden chamber (BD Biosciences, San Jose, CA) invasion assay, cells (2 × 10^3) were plated onto the apical side of 50 μg/ml Matrigel-coated filters (8.2-mm diameter and 0.5-μm pore size; Neuro Probe, Inc.; BIOMAP snc, Milan, Italy) in serum-free medium, with or without increasing concentrations of PTX, PTX-PNS, or PNS. Nontoxic drug concentrations were used for this assay. Medium containing VEGF-α (10 ng/ml) was placed in the basolateral chamber as a chemoattractant for HUVECs, and 20% FCS for melanoma cancer cells (A2058 and B16-BL6). After 6 h, cells on the apical side were wiped off with Q-tips. Cells on the bottom of the filter were stained with crystal violet, and all fields were counted with an inverted microscope.
Tube Forming Assay
Nontoxic drug concentrations were used for the tube formation assay. HUVECs were seeded onto 48-well plates (5 × 10^4/well) previously coated with 75 μl of growth factor-reduced Matrigel, with or without increasing concentrations of PTX, PTX-PNS, or PNS. The morphology of the capillary-like structures formed by the HUVECs was analyzed with an inverted microscope after 6 h of culture and photographed with a digital camera. Tubule formation was analyzed with an imaging system (Image-Pro Plus software for microimaging, Media Cybernetics, version 5.0, Bethesda, MD, USA). Tube formation was evaluated by counting the total number of tubes in three wells, and five independent experiments were performed. The results were expressed as % inhibition relative to untreated control cells.
In vivo Experiments
Eight-week-old female C57BL6/J mice (Charles River Laboratories, Wilmington, MA, USA) were injected subcutaneously (s.c.) with B16-BL6 cells (10^5 cells/mouse). The mice were bred under pathogen-free conditions in the animal facility of the Department of Health Sciences (UPO, Novara, Italy). All experimental procedures were carried out in accordance with European guidelines and our institution's ethics commission. Seven days after the injections, when the average tumor dimension reached 5 mm³, mice were randomized in a blind fashion into homogeneous groups (5 mice per group) and assigned to different treatments. Free PTX or PTX-PNS dissolved in NaCl 0.9% was administered by tail injection (100 μl/mouse) at a dose of 2.5 mg/kg, every 4 days, four times. Control mice were injected with empty PNS dissolved in PBS. Treatment-related toxicity was determined by monitoring mouse weight weekly. The tumor size was measured with a caliper, and mice were sacrificed 4 days after the last injection. Euthanasia, collection of tumor samples, and determination of tumor weight and volume were performed 2 weeks after the beginning of treatments.
Histology and Immunofluorescence Anti-CD31 and Anti-Ki-67 on Tumor Sections
Immediately after dissection, tumor samples were embedded in OCT compound (Killik, Bio Optica Milano SpA) and stored at −80 °C until use. Tumor tissues were cut with a cryostat (thickness 4-5 µm) and treated with 4% paraformaldehyde (Sigma-Aldrich) diluted in PBS for 5 min at room temperature to fix the samples on the glass slides. The samples were then blocked with 5% normal goat serum (R&D Systems) in PBS for 1 h in order to block nonspecific sites to which the primary antibody could bind. To detect CD31 and Ki-67 expression, the primary antibodies used were a polyclonal rabbit anti-CD31 (Abcam, Cambridge, UK) or a monoclonal mouse anti-human Ki-67 antigen (DAKO), both diluted 1:50 and incubated overnight at 4 °C in a humid chamber. The secondary antibody was an anti-rabbit Ig Alexa Fluor 488 conjugate (Thermo Fisher) or an anti-mouse Ig Alexa Fluor 546 conjugate (Thermo Fisher), respectively, both diluted 1:400. The sections were then stained with 0.5 mg/ml of the fluorescent dye 4,6-diamidino-2-phenylindole dihydrochloride (DAPI, Sigma-Aldrich) for 5 min to highlight cell nuclei and mounted using an anti-fade mounting medium (SlowFade AntiFADE Kit, Molecular Probes, Invitrogen). The sections were observed with a fluorescence microscope (Leica, Italy) and analyzed with Image-Pro Plus software for micro-imaging 5.0 (Media Cybernetics, version 5.0, Bethesda, MD, USA). Tumor microvessel density (MVD) was measured by evaluating the CD31-positive area; the number of Ki-67-positive cells was calculated over the total tumor area per field upon slide scanning (Panoramic midi II, 3D HISTECH, Budapest, Hungary). Hematoxylin and eosin (Sigma-Aldrich, Milan, Italy) staining was performed to assess morphological changes.
Statistical Analysis
Data were expressed as means ± SD. Significance between experimental groups was determined by one-way ANOVA followed by the Bonferroni multiple comparison post-test using GraphPad InStat software (San Diego, CA, USA). Values of p ≤ 0.05 were considered significant.
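A minimal Python (SciPy) equivalent of the described analysis, with made-up group data; the paper used GraphPad InStat, so this mirrors, rather than reproduces, the original workflow:

```python
from itertools import combinations
from scipy import stats

groups = {                      # hypothetical replicate measurements per group
    "control": [100, 98, 103, 101, 99],
    "PTX":     [72, 75, 70, 74, 73],
    "PTX-PNS": [41, 44, 39, 42, 40],
}

f_stat, p_anova = stats.f_oneway(*groups.values())   # one-way ANOVA
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4g}")

pairs = list(combinations(groups, 2))
for a, b in pairs:                                   # Bonferroni post-test
    t, p = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(1.0, p * len(pairs))                 # Bonferroni correction
    print(f"{a} vs {b}: adjusted p={p_adj:.4g}")
```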
Physico-Chemical Characterization of PTX-PNS
The physico-chemical parameters of PNS before and after loading with PTX are reported in Table 1. The PNS nanoformulations showed average diameters of about 300 nm and a negative surface charge. The drug incorporation only slightly affected the physico-chemical characteristics. The zeta potential value remained sufficiently high to avoid aggregation phenomena.
TEM analysis showed the spherical morphology of PNS and confirmed their nanoscale sizes, due to the high pressure homogenization step. Figure 1 (panel A) reports the TEM image of PTX-loaded PNS.
PNS were able to load PTX to a good extent, showing an encapsulation efficiency of about 96.5% and a loading capacity of about 8%.
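The two figures quoted here follow from the standard definitions EE% = (drug loaded / drug added) × 100 and LC% = (drug loaded / total formulation mass) × 100; a sketch with hypothetical masses chosen to match the reported values:

```python
def encapsulation_metrics(drug_loaded_mg, drug_added_mg, carrier_mg):
    """EE% = loaded/added drug; LC% = loaded drug / total formulation mass."""
    ee = 100.0 * drug_loaded_mg / drug_added_mg
    lc = 100.0 * drug_loaded_mg / (drug_loaded_mg + carrier_mg)
    return ee, lc

ee, lc = encapsulation_metrics(drug_loaded_mg=0.965, drug_added_mg=1.0, carrier_mg=11.0)
print(f"EE = {ee:.1f}%, LC = {lc:.1f}%")  # ~96.5% and ~8%, as reported
```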
The release profile of PTX from PTX-PNS was investigated in vitro at two pH values (Figure 1, panel B). Prolonged in vitro release kinetics was demonstrated, and no initial burst effect was observed. The sustained release of PTX from the PTX-PNS confirmed the drug incorporation in the PNS polymer matrix. The percentage of PTX released from the PNS was about 4% at pH 7.4 and 10% at pH 5.5 after 24 h, indicating an enhanced release kinetics at acidic pH.
Effect of PTX and PTX-PNS on Cell Proliferation
To compare the response of cells to free PTX and PTX-PNS, we first analyzed cell viability after 72 h of exposure to different concentrations of PTX (from 10^-7 to 10^-9 M) and PTX-PNS or PNS (from 10^-10 to 10^-13 M). MTT analysis revealed that cells were more affected by PTX-PNS than by free PTX. The effective concentrations ranged from 10^-7 to 10^-8 M for free PTX in all cell lines; from 3 × 10^-9 to 10^-13 M PTX-PNS in A2058, JR8, and PCF2; from 3 × 10^-9 to 3 × 10^-12 M PTX-PNS in M14; and from 3 × 10^-9 to 10^-12 M PTX-PNS in A375, RPMI7932, and B16-BL6 (Figure 2). Empty PNS did not show any toxicity even at the highest doses, and the MTT values were similar to those obtained in untreated cells.
Effect PTX and PTX-PNS in Inhibiting Growth of 2D and 3D Cultures of Primary Melanoma Cells
In primary melanoma PMel cells, PTX and PTX-PNS showed growth-inhibitory activity at different doses. PTX was effective at doses ranging from 10^-5 to 10^-9 M, whereas PTX-PNS was effective at doses ranging from 10^-9 to 10^-13 M. Empty PNS did not show any toxicity (Figure 3A). In 3D spheroids, PTX inhibited growth at concentrations ranging from 10^-5 to 10^-9 M, whereas PTX-PNS inhibited growth at concentrations ranging from 10^-9 to 10^-13 M (Figure 3B).
Effect of PTX and PTX-PNS in Inhibiting Cell Migration
Tumor growth is favored by tumor angiogenesis, which is continuously activated in cancer, resulting in the accumulation of immature and chaotic blood vessels. The acquisition of endothelial cell motility represents the first step of angiogenesis. In order to find PTX and PTX-PNS concentrations that are nontoxic to HUVECs and can be used in the migration test, MTT analysis was performed after 24 h. HUVECs were cultured in the presence and absence of titrated amounts of the different formulations. Results demonstrated that PTX concentrations ranging from 10⁻⁸ to 10⁻¹⁰ M and PTX-PNS concentrations ranging from 10⁻¹² to 10⁻¹⁴ M were nontoxic for HUVECs at 24 h (Supplementary Data, Figure S2). Thus, PTX concentrations in the range of 10⁻⁸-10⁻¹⁰ M and PTX-PNS concentrations from 10⁻¹² to 10⁻¹⁴ M were chosen for the wound-healing migration test. Analysis of the cells' ability to migrate into the scratch showed that only PTX-PNS inhibited HUVEC migration, at 10⁻¹² M (Figures 4A and B), while PTX had no effect.
FIGURE 2 | Inhibition of melanoma cell proliferation following PTX and PTX-PNS treatment. Cells were treated with increasing concentrations of PTX (from 10⁻⁷ to 10⁻⁹ M) or PTX-PNS (from 10⁻¹⁰ to 10⁻¹³ M) for 72 h. The results are expressed as % inhibition of control and are the mean ± SD of five separate experiments. **p < 0.01 vs control and PNS, *p < 0.05 vs control and PNS, §§p < 0.01 vs PTX, §p < 0.05 vs PTX.

FIGURE 4 | Wound-healing assay of HUVECs treated with different concentrations of PTX and PTX-PNS. A scratch was made through the HUVEC layer, and cells were then cultured in the absence (C) or presence of PTX (from 10⁻⁸ to 10⁻¹⁰ M), PTX-PNS (from 10⁻¹² to 10⁻¹⁴ M), or PNS (10⁻¹² M) for 24 h. (A) Microphotographs of the wounded area were taken immediately after the scratch (0 h) and after 24 h to monitor cell migration into the wounded area. (B) The graph shows the mean ± SD (n = 5) of the reduction in wound width after 24 h, relative to T0 (set at 100%). The wound-healing area was calculated using the ImageJ software. **p < 0.01 vs C, §p < 0.05 vs PTX.

To confirm these results, cell motility was measured using a Boyden chamber assay, assessing the capability of directional migration and invasion. In order to find the PTX and PTX-PNS concentrations that were not cytotoxic to HUVECs and melanoma cells, a crystal violet assay was performed after 6-h treatments with titrated amounts of the different formulations. Results demonstrated that cell viability was not affected by any of the tested drug concentrations (Supplementary Data, Table S1). The invasion experiments demonstrated that PTX and PTX-PNS inhibited HUVEC invasion in a concentration-dependent manner; PTX was active at 10⁻⁸-10⁻⁹ M, whereas PTX-PNS affected cell invasion at concentrations ranging from 10⁻¹² to 10⁻¹³ M (Figure 5A). Similar results were obtained for human and mouse melanoma cell lines (Figures 5B and C). Representative images of crystal violet-stained Matrigel-coated filters of the Boyden chambers are reported in Supplementary Data, Figure S3.
PTX and PTX-PNS Inhibit Angiogenesis
The effect of PTX and PTX-PNS on angiogenesis was evaluated with an endothelial tube formation assay, which estimates the formation of three-dimensional vessels in vitro. HUVECs were seeded onto 24-well plates (5×10⁴ cells/well) previously coated with 75 μl of growth factor-reduced Matrigel (BD Biosciences), in the absence or presence of nontoxic concentrations of PTX (10⁻⁷-10⁻¹⁰ M) or PTX-PNS (10⁻¹⁰-10⁻¹⁴ M) (Supplementary Data, Table S1). The morphology of the capillary-like structures formed by HUVECs was analyzed 6 h after seeding. The results showed that PTX and PTX-PNS dose-dependently inhibited endothelial tube formation (Figure 6A); quantification of the inhibition is shown in Figure 6B. PTX inhibited tube formation at doses ranging from 10⁻⁷ to 10⁻⁹ M, whereas PTX-PNS was more effective, inhibiting tube formation at doses ranging from 10⁻¹⁰ to 10⁻¹³ M.

FIGURE 5 | Effect of PTX and PTX-PNS on motility of HUVEC (A), A2058 (B), and B16-BL6 (C) cells assessed by Boyden chamber assay. Cells were plated onto the apical side of Matrigel-coated filters in the presence or absence of either PTX (from 10⁻⁸ to 10⁻¹⁰ M) or PTX-PNS (from 10⁻¹² to 10⁻¹⁴ M). Medium containing VEGF-α (10 ng/ml) or 20% FCS was placed in the basolateral chamber as a chemoattractant for HUVECs or melanoma cells, respectively. After 6 h, cells on the apical side were wiped off with Q-tips; cells on the bottom of the filter were stained with crystal violet and counted with an inverted microscope. Data are expressed as mean ± SD (n = 5) of the number of migrated cells. **p < 0.01 vs VEGF-α or FCS, §§p < 0.01 vs PTX.
PTX and PTX-PNS Anticancer Effect in Xenograft Tumor Model
To assess the anticancer effect of PTX and PTX-PNS in vivo, we implanted B16-BL6 cells in C57BL/6 mice and treated the animals with the two drug formulations. Results showed that tumor weights (Figure 7A), volumes (Figure 7B), and growth (Figure 7C) were significantly reduced by PTX-PNS treatment compared to those detected in mice treated with either PBS, empty PNS, or free PTX at the dose of 2.5 mg/kg; by contrast, PTX did not show any significant effect. Tumor vasculature was assessed by staining CD31 in the tumor sections, which showed that microvessel density (MVD) was significantly lower in the tumors from mice treated with PTX-PNS than in those treated with either PBS, empty PNS, or free PTX (Figures 8A and B). The number of Ki-67-positive cells (Figures 8C and D) confirmed these results. All treatments were well tolerated by the animals, without significant weight loss in any group.
DISCUSSION
The use of nanodelivery systems offers advantages that can improve the therapeutic efficacy of anticancer drugs. Indeed, nanoformulations can increase the drug concentration at the tumor site, decreasing the total administered dose and consequently reducing side effects (Duchene et al., 2016; Prasad et al., 2018). In particular, cyclodextrin-based nanosponges (NS) have been proposed for cancer nanotherapeutic development (Trotta et al., 2014; Swaminathan et al., 2016). The use of NS as a nanocarrier for PTX delivery was previously investigated: PTX was encapsulated in NS obtained by reacting CDs with diphenylcarbonate as cross-linker (Ansari et al., 2011).
The in vivo behavior of PTX-NS was studied after oral administration to rats, showing an increase in the oral bioavailability of the drug (Torne et al., 2010). Moreover, PTX could be incorporated to a great extent in another type of NS, obtained using carbonyldiimidazole as cross-linking agent (Mognetti et al., 2012). Here, we evaluated the use of PNS as a nanovehicle for PTX. PNS previously showed no toxic effects in vitro and in vivo in acute and repeated-dose toxicity studies (Shende et al., 2015). PNS were able to incorporate PTX, increasing its apparent aqueous solubility. Indeed, PTX has an extremely low aqueous solubility (less than 0.3 µg/ml), and in the commercial intravenous dosage form it is currently dissolved in a mixture of Cremophor EL® (polyoxyethyleneglycerol triricinoleate 35) and dehydrated ethanol (1:1 v/v). The PTX incorporation in the PNS nanostructure was confirmed by the slow and prolonged in vitro release kinetics of the drug from PTX-PNS. Interestingly, the results demonstrated that PTX-PNS inhibited melanoma cell growth more effectively than free PTX. The inhibitory activity on cell proliferation was effective on all of the melanoma cell lines used in this study, including primary PMel cells. Moreover, the cytotoxicity of PTX-PNS was displayed at concentrations a thousand times lower than those of free PTX. PTX-PNS significantly inhibited the proliferation of primary tumor cells in both 2D and 3D melanoma cell cultures with the same effectiveness. The results on 3D melanoma spheroid cultures are particularly relevant, since 3D-cultured cells acquire morphological and cellular features that are more similar to solid tumors than 2D cultures. In particular, Ma et al. (2012) compared nanoparticle penetration properties of different culture systems and reported that 3D spheroids of HeLa cells displayed morphologic features similar to human solid tumors, including a resistance to chemotherapeutics that could not be observed in 2D cultures. In line with this observation, it has been suggested that 3D spheroids may be a useful simplified model of tumor tissue for in vitro testing of anticancer therapeutics (Edmondson et al., 2014; Huang et al., 2015).

FIGURE 6 | Tube formation assay of HUVECs treated with different concentrations of PTX or PTX-PNS. HUVECs were plated in the presence or absence of PTX (from 10⁻⁷ to 10⁻¹⁰ M) or PTX-PNS (from 10⁻¹⁰ to 10⁻¹⁴ M). (A) The morphology of the capillary-like structures formed by the HUVECs was analyzed with an inverted microscope after 6 h of culture and photographed with a digital camera. (B) The graph shows tube formation, evaluated by counting the total number of tubes in three wells of five different experiments. The results are expressed as % inhibition of untreated control cells. Data are expressed as mean ± SD. **p < 0.01 vs VEGF-α or FCS, *p < 0.05 vs VEGF-α or FCS, §§p < 0.01 vs PTX, §p < 0.05 vs PTX.
The effectiveness of PTX-PNS in 3D cultures demonstrated that this nanoformulation is effective on a tumor-like environment mimicking several features of tumors involved in chemotherapy resistance such as three-dimensional architecture, cell-cell interaction, and hypoxia.
Cancer metastasis is associated with stimulation of cancer cell migration and invasion of the neighboring tissues. In line with the previous results, PTX-PNS inhibited melanoma cell invasion at concentrations much lower than those required for free PTX.
Cancer progression is also associated with stimulation of tumor neoangiogenesis, which produces newly formed vessels to feed the tumor. This process involves endothelial cell migration and the generation of tubule-like structures to form vessels. PTX can reduce endothelial cell migration at concentrations ranging from 10⁻⁷ to 10⁻⁹ M, in line with previous reports on taxane effects on cell migration (Ballestrem et al., 2000). However, also in this case, PTX-PNS was much more effective than free PTX in inhibiting HUVEC migration and invasion. A similar pattern was observed for the in vitro tubulogenesis of endothelial cells, where the inhibitory effect of PTX was obtained at nanomolar concentrations, in line with previous data from Taraboletti et al. (2002). Intriguingly, PTX-PNS were able to inhibit tubulogenesis at lower concentrations than free PTX.

FIGURE 7 | In vivo experiments on a mouse melanoma model. C57BL/6J mice were injected subcutaneously with B16-BL6 cells (10⁵ cells/mouse). Seven days after tumor injection, mice were treated every 4 days for 2 weeks by i.v. injection of PTX, PTX-PNS, or PNS (2.5 mg/kg, 100 μl/mouse) or the same volume of 0.9% NaCl as control (five mice/group). Mice were sacrificed at the end of the experiment. Graphs show (A) tumor weight (mg, mean ± SD), (B) tumor volume curves (cm³, mean ± SD), and (C) tumor growth (cm³, mean ± SD). Tumors were evaluated every 4 days after the first treatment performed at T1 (i.e., when they were palpable). Data are expressed as mean ± SD. *p < 0.05 vs VEGF-α or FCS, §§p < 0.01 vs PTX, §p < 0.05 vs PTX.
Finally, we demonstrated that PTX-PNS was more effective than PTX in inhibiting the in vivo growth of melanoma cells in a mouse model as well. Indeed, the weight, volume, and growth of melanoma were significantly reduced in mice treated with PTX-PNS, whereas no significant inhibition was obtained with the same dose of free PTX. The results on angiogenesis and on the proliferation rate of tumor cells in vivo are in agreement with the in vitro experiments, since the microvessel density in the tumor and the percentage of Ki-67-positive cells were significantly decreased by treatment with PTX-PNS, whereas no significant effect was obtained upon treatment with free PTX.
Taken together, our data demonstrate that our new PTX nanoformulation addresses some important issues related to PTX treatment, such as solubility and toxicity. The incorporation of PTX in nanosponges might allow lower anti-tumor doses while increasing effectiveness in inhibiting melanoma growth in cell and animal models.
ETHICS STATEMENT
All experimental procedures were performed according to European Guidelines and approved by our institution's ethics commission.
"Medicine",
"Materials Science"
] |
Byte Pair Encoding is Suboptimal for Language Model Pretraining
The success of pretrained transformer language models (LMs) in natural language processing has led to a wide range of pretraining setups. In particular, these models employ a variety of subword tokenization methods, most notably byte-pair encoding (BPE) (Sennrich et al., 2016; Gage, 1994), the WordPiece method (Schuster and Nakajima, 2012), and unigram language modeling (Kudo, 2018), to segment text. However, to the best of our knowledge, the literature does not contain a direct evaluation of the impact of tokenization on language model pretraining. We analyze differences between BPE and unigram LM tokenization, finding that the latter method recovers subword units that align more closely with morphology and avoids problems stemming from BPE’s greedy construction procedure. We then compare the fine-tuned task performance of identical transformer masked language models pretrained with these tokenizations. Across downstream tasks and two languages (English and Japanese), we find that the unigram LM tokenization method matches or outperforms BPE. We hope that developers of future pretrained LMs will consider adopting the unigram LM method over the more prevalent BPE.
Introduction
Large transformers (Vaswani et al., 2017) pretrained with variants of a language modeling objective, such as BERT (Devlin et al., 2019), have proven their effectiveness at flexibly transferring to a variety of domains and tasks. One design decision that makes them particularly adaptable is their graceful handling of the open vocabulary problem through subword tokenization. Subword tokenization, popularized in the neural machine translation literature (Sennrich et al., 2016;Vaswani et al., 2017;Wu et al., 2016), produces tokens at multiple levels of granularity, from individual characters to full words. As a result, rare words are broken down into a collection of subword units, bottoming out in characters in the worst case.
Critically, a pretrained language model's subword vocabulary cannot be altered: any downstream application of these models must tokenize input or generate output using the original subword vocabulary, making the choice of tokenization a particularly significant decision.
A variety of subword tokenization methods have seen use in pretrained language models. BERT uses the WordPiece method (Schuster and Nakajima, 2012), a language-modeling-based variant of BPE; T5 (Raffel et al., 2019) uses character-level BPE; GPT2 (Radford et al., 2019) and ROBERTA use BPE over raw bytes instead of unicode characters; XLNET (Yang et al., 2019) and ALBERT (Lan et al., 2019) use the SentencePiece library (Kudo and Richardson, 2018), which implements both BPE and unigram language model tokenization, but in both cases fail to clarify which of these methods they chose. The effects of tokenization are not examined in a reported experiment in any of the above works except one, whose authors note that WordPiece gave a small advantage over BPE in a preliminary investigation. In the machine translation literature, Kudo (2018) introduced the unigram language model tokenization method in the context of machine translation and found it comparable in performance to BPE. Domingo et al. (2018) performed further experiments to investigate the effects of tokenization on neural machine translation, but used a shared BPE vocabulary across all experiments. Gallé (2019) examined algorithms in the BPE family, but did not compare to unigram language modeling.
In this work, we characterize the space of proposed subword tokenization algorithms and analyze the differences between the two methods with publicly available implementations: BPE (merging tokens based on bigram frequency) and unigram language modeling (pruning tokens based on unigram LM perplexity). While the vocabularies resulting from these schemes are heavily overlapping, we compare each method to reference morphological segmentations and find that the unigram LM method produces tokens better aligned with morphology. To understand whether this more natural tokenization leads to improved performance, we pretrain separate language models using the ROBERTA objective with each tokenization for both English and Japanese, two typologically distant languages. On downstream tasks, we find a performance gap across tasks and languages, with the unigram LM method providing an improvement over BPE of up to 10% in our Japanese QA experiments, indicating the benefits of adopting this technique in the context of language model pretraining.
Algorithms
Subword tokenization algorithms consist of two components: a vocabulary construction procedure, which takes a corpus of text and returns a vocabulary with the desired size, and a tokenization procedure, which takes the built vocabulary and applies it to new text, returning a sequence of tokens. In theory, these two steps can be independent, although for the algorithms we examine the tokenization procedure is tightly coupled to the vocabulary construction procedure.
A BPE vocabulary is constructed as follows (Algorithm 1, byte-pair encoding; Sennrich et al., 2016; Gage, 1994): initialize the vocabulary V with the individual characters of the corpus D; then, while |V| is smaller than the target size, find the most frequent adjacent token pair (t_L, t_R) in D, add the merged token t_NEW = t_L t_R to V, and replace each occurrence of t_L, t_R in D with t_NEW; finally, return V. BPE tokenization takes the vocabulary V containing the ordered merges and applies them to new text in the same order as they occurred during vocabulary construction.
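The construction fits in a few lines of Python; the following sketch is a didactic implementation of Algorithm 1 on a toy word-frequency table (the corpus and target size are invented for illustration, and real implementations are considerably more efficient).

```python
from collections import Counter

def build_bpe_vocab(word_freqs, target_size):
    # Words are tuples of symbols; start from the character vocabulary.
    corpus = {tuple(w): f for w, f in word_freqs.items()}
    vocab = {c for w in corpus for c in w}
    merges = []
    while len(vocab) < target_size:
        pairs = Counter()
        for word, freq in corpus.items():
            for left, right in zip(word, word[1:]):
                pairs[(left, right)] += freq
        if not pairs:
            break
        (l, r), _ = pairs.most_common(1)[0]  # most frequent bigram (t_L, t_R)
        new = l + r                          # t_NEW
        vocab.add(new)
        merges.append((l, r))

        def merge(word):
            # Replace each occurrence of the pair (l, r) with the merged token.
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and word[i] == l and word[i + 1] == r:
                    out.append(new)
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            return tuple(out)

        merged_corpus = {}
        for w, f in corpus.items():
            mw = merge(w)
            merged_corpus[mw] = merged_corpus.get(mw, 0) + f
        corpus = merged_corpus
    return vocab, merges

vocab, merges = build_bpe_vocab({"low": 5, "lower": 2, "newest": 6, "widest": 3}, 15)
print(merges)  # ordered merges, applied in this same order at tokenization time
```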
The WordPiece algorithm (Schuster and Nakajima, 2012), used to construct BERT's vocabulary, closely resembles BPE. However, instead of merging the most frequent token bigram, each potential merge is scored based on the likelihood of an n-gram language model trained on a version of the corpus incorporating that merge. Schuster and Nakajima (2012) note that the process of estimating language model parameters for every potential merge is prohibitive, so they employ aggressive heuristics to reduce the number of potential merges considered. As their implementation is not public, we are unable to make a comparison to this method.
The unigram LM method (Kudo, 2018), in contrast to the bottom-up construction process of BPE and WordPiece, begins with a superset of the final vocabulary and prunes it to the desired size: at each step, a unigram language model θ is fit to the corpus over the current vocabulary V; then, for each token t ∈ V, a 'loss' is estimated, measuring how much the corpus likelihood decreases under the LM without token t; the tokens whose removal costs the least are pruned, and the procedure repeats until |V| reaches the target size, returning V and θ. Unigram LM tokenization takes the vocabulary V and unigram LM parameters θ and performs Viterbi inference to decode the segmentation with maximum likelihood under θ. This method is similar to Morfessor's unsupervised segmentation (Creutz and Lagus, 2005) without its informed prior over token length.
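The decoding step is easy to illustrate; the Python sketch below implements Viterbi segmentation under fixed token log-probabilities. The probabilities are invented for illustration, and the EM fitting and pruning loop described above is omitted.

```python
import math

def viterbi_tokenize(text, logprob):
    n = len(text)
    best = [0.0] + [-math.inf] * n  # best[i]: score of the best segmentation of text[:i]
    back = [0] * (n + 1)            # back[i]: start index of the last token in it
    for i in range(1, n + 1):
        for j in range(i):
            tok = text[j:i]
            if tok in logprob and best[j] + logprob[tok] > best[i]:
                best[i] = best[j] + logprob[tok]
                back[i] = j
    tokens, i = [], n
    while i > 0:                    # recover the best path backwards
        tokens.append(text[back[i]:i])
        i = back[i]
    return tokens[::-1]

# Invented log-probabilities for illustration only.
lp = {"un": -3.0, "i": -4.0, "gram": -3.5, "unigram": -12.0,
      "u": -5.0, "n": -5.0, "g": -5.0, "r": -5.0, "a": -5.0, "m": -5.0}
print(viterbi_tokenize("unigram", lp))  # ['un', 'i', 'gram'] scores -10.5, beating 'unigram' at -12.0
```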
Figure 1: Example tokenizations of the sentence 'Magnetism is classified in various ways.' and its Japanese translation. The character '▁' is a word boundary marker. BPE merges common tokens, such as English inflectional suffixes and Japanese particles, into their neighbors even when the resulting unit is not semantically meaningful.

In the course of our experiments we did not observe a major difference in speed between the two algorithms. Both require similar amounts of time to construct a vocabulary, and both have a negligible impact on overall model inference latency.
Morphology
In Figure 1 we illustrate the differences in tokenization output between BPE and the unigram LM method. We observe that the unigram LM method produces subword units that qualitatively align with morphology much better than those produced by BPE. In particular, we note that the unigram LM method recovers common affixes such as -ly, -s, pre-, and tri-, while BPE does not, instead absorbing them into adjacent units (-cles) while also producing meaningless single-character units.
This trend is supported by counts over our pretraining corpus: we observe that recognizable affixes appear much more frequently in the unigram LM tokenization than in the BPE tokenization. As the BPE tokenization is constructed greedily according to frequency, common affixes (and punctuation) are frequently absorbed into other tokens. We see in Figure 2a that the unigram LM tokenization tends to have longer subword units than BPE. This is closer to the length distribution of gold-standard English morphs, which have a mean length of approximately 6 characters (Creutz and Lindén, 2004).
Comparison with morphological segmenters
In Table 3, we further corroborate these observations by performing a quantitative evaluation of the degree to which each unsupervised segmentation algorithm aligns with morphological baselines for each language. For English, we produce gold surface allomorph boundaries from the CELEX2 lexical database (Baayen et al., 1995) in the manner of Creutz and Lindén (2004). We then compare each algorithm's subword unit boundaries with gold morpheme boundaries for words with 2 or more morphemes, weighted by their frequency in English Wikipedia. For Japanese, we compare subword tokenizations of Japanese Wikipedia sentences to morphological reference tokenizations produced using the MeCab morphological analysis and tokenization tool (Kudo, 2006) using version 2.3.0 of the UniDic dictionary (Den et al., 2007).
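Concretely, such a boundary-level comparison can be computed as sketched below; the example word is invented, and the paper's exact scoring protocol may differ in details such as frequency weighting.

```python
def boundaries(segments):
    # Inner boundary offsets of a segmentation, e.g. ["hope","ful","ly"] -> {4, 7}.
    pos, out = 0, set()
    for seg in segments[:-1]:
        pos += len(seg)
        out.add(pos)
    return out

def boundary_counts(pred_segs, gold_segs, freq=1):
    # Returns (matched, predicted, gold) boundary counts, weighted by word frequency.
    p, g = boundaries(pred_segs), boundaries(gold_segs)
    return len(p & g) * freq, len(p) * freq, len(g) * freq

# 'hopefully': predicted hop|eful|ly vs gold hope|ful|ly share only the final boundary.
tp, n_pred, n_gold = boundary_counts(["hop", "eful", "ly"], ["hope", "ful", "ly"])
print("precision:", tp / n_pred, "recall:", tp / n_gold)  # 0.5 and 0.5
```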
We find that for both languages, the segmentations produced by the unigram LM method correspond more closely to the morphological references, confirming our qualitative analysis. On English data, both unsupervised methods exhibit low boundary recall; we attribute this to the fact that they represent many common words with underlying derivational morphology as single tokens, although for BPE this is compounded by effects we discuss in Section 3.2.
The ability of the unigram LM method to recover the morphological structure of the text without explicit supervision aligns with the main findings of Creutz and Lagus (2005), who successfully use maximum-a-posteriori unigram language models to perform unsupervised morphological segmentation of English and Finnish.
Vocabulary Allocation
By surfacing subword units that align with morphology, the unigram LM tokenization provides the opportunity for the model to learn composable subword embeddings. If an affix reliably signals a linguistic feature, rather than needing to store that information redundantly across the embeddings of many tokens containing the affix, the model can store it in just the embedding of the affix.
These results suggest that the unigram LM method may allocate its vocabulary more economically. We note in Figure 2b that both vocabularies contain a "dead zone" of tokens whose frequency is much lower than the rest of the vocabulary. This is largely the result of the presence of a number of very uncommon characters, including Chinese and Japanese kanji, in the training corpus. In the BPE tokenization, however, this effect is exacerbated, with the dead zone containing about 1500 more entries as a result of the tendency of its vocabulary construction process to produce intermediate "junk" tokens. For example, in the case where three tokens almost always occur as a group, in order to merge them into a single token, BPE must first merge one pair before incorporating the third token; this leaves an intermediate token in the vocabulary that will only occur rarely on its own. Additionally, tokens that appear in many contexts, such as inflectional affixes (-s, -ed), will tend to merge with many adjacent units due to their frequency. However, these merges lead to embedding redundancy, as these affixes usually have the same linguistic function in every context. Since the unigram LM method selects tokens during vocabulary construction using a global optimization procedure, it does not produce junk tokens; this property also allows it to avoid merging frequent tokens with their neighbors too aggressively.
Japanese vocabulary comparisons are included in Appendix B.

Table 4: Fine-tuning results. Metrics are averaged across 5 fine-tuning seeds with standard deviations indicated by ±; due to computational constraints we did not pretrain more than once per tokenization. We include fine-tuning results for a transformer with a comparable architecture, BERT BASE, for reference, although we note that a direct comparison cannot be made due to BERT BASE using both a larger pretraining corpus and a larger subword vocabulary.
Downstream Task Experiments
In order to make a fair experimental comparison between these two methods on downstream tasks, we do not use an existing pretrained language model like BERT, but instead train our own language models from scratch, controlling for the data, training objective, and optimization procedure. We pretrain four transformer masked language models using the architecture and training objective of ROBERTA-BASE, using the reference fairseq implementation. Two are pretrained on the text of English Wikipedia, comprising ∼3B tokens under either tokenization. The other two are pretrained on the text of Japanese Wikipedia, comprising ∼0.6B tokens. In each pair, one model is pretrained on the BPE tokenization of the corpus, and the other on the unigram LM tokenization, each with a vocabulary of 20,000 tokens. Hyperparameters are listed in Appendix A. We subsequently fine-tune each of the pretrained English models on the SQuAD question-answering task (Rajpurkar et al., 2016), the MNLI textual entailment task (Williams et al., 2018), and the English portion of the CoNLL 2003 named-entity recognition shared task (Tjong Kim Sang and De Meulder, 2003). We fine-tune the Japanese models on the Japanese minimal-answer subset of the TyDi question-answering task (Clark et al., 2020). We base our fine-tuning implementations on those of the transformers toolkit (Wolf et al., 2019).
The results of our fine-tuning experiments are presented in Table 4. We show that fine-tuning models pretrained with unigram LM tokenization produces better performance than fine-tuning models pretrained with BPE tokenization for all tasks. These results suggest that the higher morphological plausibility of the unigram LM tokenization may translate into better downstream task performance as well. Larger performance gaps are evident on SQuAD and MNLI, but the largest gap appears on Japanese TyDi. Differences in pretraining may be more evident in this setting because the Japanese portion of the TyDi training split contains only ∼5k examples, compared to the ∼88k examples available for fine-tuning on SQuAD. Additionally, written Japanese does not feature whitespace between words, so tokenizations can differ in word boundary placement as well as in subword segmentation.
Conclusion
In this work we show that the choice of input encoding makes a difference in how well pretrained language models are able to perform end tasks. This indicates that tokenization encodes a surprising amount of inductive bias, and we suggest that unigram LM tokenization may be the better choice for the development of future pretrained models.
"Computer Science",
"Linguistics"
] |
Fredholm Property of Nonlocal Problems for Integro-Differential Hyperbolic Systems
The paper concerns nonlocal time-periodic boundary value problems for first-order Volterra integro-differential hyperbolic systems with boundary inputs. The systems are subjected to integral boundary conditions. Under natural regularity assumptions on the data it is shown that the problems display completely non-resonant behaviour and satisfy the Fredholm alternative in the spaces of continuous and time-periodic functions.
The Volterra integral terms in (1.1) are motivated by the aforementioned applications (see, e.g., [13,18]). As will be seen from our proof of Theorem 1.2, our analysis also applies to the case when these terms are replaced by Fredholm integral terms.
In general, systems of the type (1.1), (1.3) model a broad range of physical problems such as traffic flows, chemical reactors and heat exchangers [18]. They are also used to describe problems of population dynamics (see, e.g., [3,7,15,20] and references therein) and polymer rheology [4]. Moreover, they appear in the study of optimal boundary control problems [13,16,18,19].
Establishing a Fredholm property is a first step in developing a theory of local smooth continuation [12] and bifurcation [1,2,11] for Fredholm hyperbolic operators, in particular, enabling tools such as the Lyapunov-Schmidt reduction. Buono and Eftimie [1] consider autonomous 2×2 nonlocal hyperbolic systems in a single space variable, describing the formation and movement of various animal, cell, and bacterial aggregations, with some biologically motivated integral terms in the differential equations. One of the main results in [1] is a Fredholm alternative for the linearizations at a steady state, which enables a bifurcation analysis by means of the Lyapunov-Schmidt reduction. Here we continue this line of research, establishing the Fredholm property for a wide range of non-autonomous nonlocal problems for (n × n)-hyperbolic systems, with nonlocalities both in the differential equations and in the boundary conditions. We show that the problem (1.1)-(1.3) demonstrates a completely non-resonant behavior (in other terms, no small divisors occur). More precisely, we prove the Fredholm alternative for (1.1)-(1.3) under the sole assumptions that the coefficients in (1.1) and (1.3) are sufficiently smooth and that a kind of Levy condition is fulfilled. The proof extends the ideas of [9,10] for proving the Fredholm alternative for first-order one-dimensional hyperbolic systems with reflection boundary conditions, and also the ideas of [8] for proving a smoothing property for hyperbolic boundary value problems. In contrast to [9] and [10], where conditions excluding a resonant behavior are imposed, the present Fredholmness result is, in this respect, unconditional.
Our result
By $C_{n,2\pi}$ we denote the vector space of all continuous maps $u : [0,1] \times \mathbb{R} \to \mathbb{R}^n$ that are $2\pi$-periodic in $t$, endowed with the sup-norm $\|u\|_\infty = \max_{j \le n} \sup_{x,t} |u_j(x,t)|$. Similarly, $C^1_{n,2\pi}$ denotes the Banach space of all $u \in C_{n,2\pi}$ such that $\partial_x u, \partial_t u \in C_{n,2\pi}$, with the norm $\|u\|_{C^1_{n,2\pi}} = \|u\|_\infty + \|\partial_x u\|_\infty + \|\partial_t u\|_\infty$. For simplicity, we skip the subscript $n$ if $n = 1$ and write $C_{2\pi}$ and $C^1_{2\pi}$ for $C_{1,2\pi}$ and $C^1_{1,2\pi}$, respectively.
We make the following natural assumptions on the coefficients of (1.1) and (1.3):

$a_j \in C^1_{2\pi}$ and $b_{jk}, \partial_t b_{jk}, g_{jk}, h_{jk}, r_{jk}, \partial_t r_{jk} \in C_{2\pi}$ for all $j \le n$ and $k \le n$, (1.4)

$a_j(x,t) \neq 0$ for all $j \le n$, $x \in [0,1]$, and $t \in \mathbb{R}$, (1.5)

and for all $1 \le j \neq k \le n$ there exists $\tilde b_{jk} \in C_{2\pi}$ such that $\partial_t \tilde b_{jk} \in C_{2\pi}$ and $b_{jk} = \tilde b_{jk}(a_k - a_j)$. (1.6)

The assumption (1.5) is standard and means the non-degeneracy of the hyperbolic system (1.1). The assumption (1.6) is a kind of the well-known Levy condition appearing in various aspects of hyperbolic theory, for instance, in proving the spectrum-determined growth condition for semiflows generated by initial value problems for hyperbolic systems [5,14,17]. It also plays a crucial role in the Fredholm analysis of hyperbolic PDEs (see Example 1.3 below). Given $j \le n$, $x \in [0,1]$, and $t \in \mathbb{R}$, the $j$-th characteristic of (1.1) is defined as the solution $\xi \in [0,1] \mapsto \omega_j(\xi, x, t) \in \mathbb{R}$ of the initial value problem

$\partial_\xi \omega_j(\xi, x, t) = \dfrac{1}{a_j(\xi, \omega_j(\xi, x, t))}, \qquad \omega_j(x, x, t) = t.$ (1.7)

To shorten notation, we will write $\omega_j(\xi) = \omega_j(\xi, x, t)$. In what follows we use the equalities (1.8) and (1.9) relating the partial derivatives of $\omega_j$ (here and below, $\partial_i$ denotes the partial derivative with respect to the $i$-th argument), as well as the functions $c_j$ and $d_j$ introduced in (1.10). Integration along the characteristic curves brings the system (1.1)-(1.3) to an integral form: restricted to the $j$-th characteristic, the $j$-th equation of (1.1) becomes a linear inhomogeneous ordinary differential equation for the function $u_j(\cdot, \omega_j(\cdot, x, t))$, and the variation of constants formula (with initial condition at $x_j$) yields, after inserting the boundary conditions (1.3) and using the notation (1.10), the system (1.11), as desired.
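Since the displays (1.8)-(1.11) did not survive extraction, the following LaTeX sketch reconstructs the shape of the variation-of-constants step under the usual conventions for such systems; the exact forms of $c_j$ and $d_j$ and of the lower-order and integral terms are assumptions to be checked against the original displays.

```latex
% Hedged reconstruction (not the paper's verbatim displays): the usual
% exponential weights along the j-th characteristic are
\[
  c_j(\xi, x, t) = \exp \int_{\xi}^{x}
      \frac{b_{jj}}{a_j}\bigl(\eta, \omega_j(\eta)\bigr)\, d\eta ,
  \qquad
  d_j(\xi, x, t) = \frac{c_j(\xi, x, t)}{a_j\bigl(\xi, \omega_j(\xi)\bigr)} ,
\]
% and the variation of constants formula then takes the schematic form
\[
  u_j(x, t) = c_j(x_j, x, t)\, u_j\bigl(x_j, \omega_j(x_j)\bigr)
            + \int_{x_j}^{x} d_j(\xi, x, t)\,
              \Bigl( f_j - \sum_{k \neq j} b_{jk} u_k
                     - (\text{Volterra integral terms}) \Bigr)
              \bigl(\xi, \omega_j(\xi)\bigr)\, d\xi .
\]
```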
This problem is a particular case of (1.1)-(1.3) and satisfies all assumptions of Theorem 1.2 with the exception of (1.6). It is straightforward to check that there are infinitely many linearly independent solutions to the problem (1.12)-(1.14) and, therefore, the kernel of the operator of (1.12)-(1.14) is infinite dimensional. Thus, the conclusion of Theorem 1.2 is not true without (1.6).
Then the system (1.11) can be written as the operator equation $u = (R + B + G + H)u + f$. Note that Theorem 1.2 says exactly that the operator $I - R - B - G - H : C_{n,2\pi} \to C_{n,2\pi}$ is Fredholm of index zero. Nikolsky's criterion [6, Theorem XIII.5.2] says that an operator $I + K$ on a Banach space is Fredholm of index zero whenever $K^2$ is compact. It is interesting to note that the compactness of $K^2$ and the identity $I - K^2 = (I + K)(I - K)$ imply that the operator $I - K$ is a parametrix of the operator $I + K$ (see [21]).
We, therefore, have to show that the operator $K^2 : C_{n,2\pi} \to C_{n,2\pi}$ with $K = R + B + G + H$ is compact. Since the operators $R$, $B$, $G$, and $H$ are bounded and the composition of a bounded and a compact operator is compact, it is enough to show that the operators $H$, $G$, $R^2$, $RB$, $B^2$, $BR : C_{n,2\pi} \to C_{n,2\pi}$ are compact. (2.4) We start with the compactness of $H$. By $C_{2\pi}(\mathbb{R})$ we denote the space of all continuous and $2\pi$-time-periodic maps $v : \mathbb{R} \to \mathbb{R}$. Fix arbitrary $j \le n$ and $k \le n$ and define the operator $H_{jk} \in \mathcal{L}(C_{2\pi}(\mathbb{R}), C_{2\pi})$ by (2.5). It suffices to show the compactness of $H_{jk}$. Change the variable $\xi$ to $z = \omega_j(\xi)$ and denote the inverse map by $\xi = \tilde\omega_j(z) = \tilde\omega_j(z, x, t)$; afterwards (2.5) reads accordingly. By the regularity assumption (1.4), the functions $\omega_j(x_j)$, $\tilde\omega_j(z)$, $d_j(\xi, x, t)$, $h_{jk}(x, z)$, and $a_j(x, z)$ are continuous in all their arguments and $2\pi$-periodic in $t$ and, hence, are uniformly continuous in $x$ and $t$. The equicontinuity of $(H_{jk}v)(x, t)$, for $v$ ranging over a bounded subset of $C_{2\pi}(\mathbb{R})$, then follows straightforwardly. Using the Arzelà-Ascoli precompactness criterion, we conclude that $H_{jk}$ and, hence, $H$ are compact. Now we consider the operator $G$. Changing the variable $\xi$ to $z = \omega_j(\xi, x, t)$ in (2.2), we get an analogous representation. Similarly to the above, the functions $\omega_j(x_j)$, $\tilde\omega_j(z)$, $d_j(\tilde\omega_j(z), x, t)$, and $a_j(\tilde\omega_j(z), z)$ are $2\pi$-periodic in $t$ and uniformly continuous in $x$ and $t$. This entails the equicontinuity of $(Gu)_j(x, t)$ for $u$ ranging over a bounded subset of $C_{n,2\pi}$. The compactness of $G$ again follows from the Arzelà-Ascoli theorem.
We further proceed with the compactness of $R^2$. For $j \le n$ and $k \le n$ define operators $R_{jk} \in \mathcal{L}(C_{2\pi})$ as in the preceding displays. Fix arbitrary $j \le n$, $k \le n$, and $i \le n$. We prove the compactness of the operator $R_{jk} R_{ki}$; the compactness of all other operators contributing to $R^2$ will follow from the same argument. Introduce operators $P_j, Q_{jk} : C_{2\pi} \to C_{2\pi}$ and note the corresponding factorization. We aim at showing the compactness of $P_j Q_{jk} P_k$, as this and the boundedness of $Q_{ki}$ will entail the compactness of $R_{jk} R_{ki}$. The operator $P_j Q_{jk} P_k$ reads

$(P_j Q_{jk} P_k w)(x, t) = \int r_{jk}(\xi, \omega_j(x_j, \xi, t))\, c_k(x_k, \xi, \omega_j(x_j, \xi, t)) \int_0^1 w(\eta, \omega_k(x_k, \xi, t))\, d\eta\, d\xi.$ (2.10)

Changing the variable $\xi$ to $z = \omega_k(x_k, \xi, t)$, we get (2.11), with the derivative of the inverse map given by (2.12). Similarly to the above, the compactness of $P_j Q_{jk} P_k$ now immediately follows from the regularity assumption (1.4) and the Arzelà-Ascoli theorem. Now we treat the next operator in (2.4), whose $j$-th component, for an arbitrary fixed $j \le n$, is a double integral whose integrand contains the factor $b_{kl}(\xi, \omega_k(\xi, \eta, \omega_j(x_j)))\, u_l(\xi, \omega_k(\xi, \eta, \omega_j(x_j)))$, integrated in $d\xi\, d\eta$. After changing the order of integration, we get the same expression integrated in $d\eta\, d\xi$.
Then we change the variable $\eta$ to $z = \omega_k(\xi, \eta, \omega_j(x_j))$; the inverse map and the derivative $\partial_3 \tilde\omega_k(\omega_j(x_j), \xi, z)$ are given by (2.12). The functions $\omega_j(\xi, x, t)$ and the kernels of the integral operators in (2.13) are continuous and $t$-periodic and, hence, uniformly continuous in $x$ and $t$. This means that we are again within the scope of the Arzelà-Ascoli theorem, as desired.
We proceed to show that $B^2 : C_{n,2\pi} \to C_{n,2\pi}$ is compact. By the Arzelà-Ascoli theorem, $C^1_{n,2\pi}$ is compactly embedded into $C_{n,2\pi}$. The desired compactness property will therefore follow if we show that $B^2$ maps $C_{n,2\pi}$ continuously into $C^1_{n,2\pi}$. (2.14) By using the equalities (1.8), (1.9), and (2.1), the partial derivatives $\partial_x B^2 u$ and $\partial_t B^2 u$ exist and are continuous for each $u \in C^1_{n,2\pi}$. Since $C^1_{n,2\pi}$ is dense in $C_{n,2\pi}$, the desired condition (2.14) will follow from a bound of the form $\|B^2 u\|_{C^1_{n,2\pi}} \le \mathrm{const}\, \|u\|_{C_{n,2\pi}}$. (2.15) To prove (2.15), for given $j \le n$ and $u \in C^1_{n,2\pi}$, we consider the representation (2.16) for $(B^2 u)_j(x, t)$, obtained after an application of Fubini's theorem. Using an auxiliary identity, valid for all $j \le n$, $\varphi \in C^1(\mathbb{R})$, $x, \xi \in [0,1]$, and $t \in \mathbb{R}$, one can then easily check the estimate (2.18) for all $j \le n$ and $u \in C^1_{n,2\pi}$.
The desired estimate (2.18) now follows easily from the assumptions (1.4)-(1.6). Returning to (2.4), it remains to prove that the operator $BR : C_{n,2\pi} \to C_{n,2\pi}$ is compact. By the definitions of $B$ and $R$, $(BRu)_j$, $j \le n$, is given by the double integral (2.21), whose integrand contains the factor $r_{kl}(\eta, \omega_k(x_k, \xi, \omega_j(\xi)))\, u_l(\eta, \omega_k(x_k, \xi, \omega_j(\xi)))$. The integral operators in (2.21) are similar to those in (2.16) and, therefore, the proof of the compactness of $BR$ follows along the same lines as the proof of the compactness of $B^2$. The proof of Theorem 1.2 is complete.
"Mathematics"
] |
Validating Dominator Trees for a Fast, Verified Dominance Test
The problem of computing dominators in a control flow graph is central to numerous modern compiler optimizations. Many efficient algorithms have been proposed in the literature, but mechanizing the correctness of the most sophisticated ones is still considered too hard a problem, and, to date, verified compilers use less optimized implementations. In contrast, production compilers, like GCC or LLVM, implement the classic, efficient Lengauer-Tarjan algorithm [12] to compute dominator trees, and subsequent optimization phases can then determine whether a CFG node dominates another node in constant time by using their respective depth-first search numbers in the dominator tree. In this work, we aim at integrating such techniques into verified compilers. We present a formally verified validator of untrusted dominator trees, on top of which we implement and prove correct a fast dominance test following these principles. We conduct our formal development in the Coq proof assistant and integrate it in the middle-end of the CompCertSSA verified compiler. We also provide experimental results showing a performance improvement over previous formalizations.
Introduction and Related Work
Given a control flow graph (CFG) with a single entry node, computing dominators consists in determining, for each node in the graph, the set of nodes that dominate it. Informally, a node d dominates another node n if d belongs to every path from the entry node to n. The problem of computing dominators is ubiquitous in computer science, and occurs in applications ranging from program optimization to circuit testing, analysis of component systems, and worst-case execution time estimation.
Since 1972, this problem has been extensively studied. Many algorithms have been proposed, trading off ease of implementation and efficiency. The natural formulation of the problem as data-flow equations is due to Allen and Cocke [1]. It can be directly implemented using an iterative Kildall algorithm, but suffers, in this case, from a quadratic asymptotic complexity. Cooper et al. [4] present another iterative solution for this equation system, based on a more compact representation of dominator sets (only the immediate dominator, i.e. the closest dominator, is computed for each node) and a careful implementation, leading to better performance in practice, despite having the same worst-case time bound as [1]. To date, the most popular algorithm remains the one by Lengauer-Tarjan [12], which, like the Cooper et al. algorithm, computes a compact representation of the dominance relation (namely the dominator tree). But this sophisticated algorithm relies on a depth-first search (DFS) spanning tree of the CFG with elaborate path compression and tree balancing techniques to achieve a stunning near-linear complexity. We refer the interested reader to [16] for a more complete survey of the numerous algorithms proposed so far in the literature, and to [10] for a thorough experimental study comparing the leading algorithms.
We consider the problem of dominators in the specific context of compilation, where dominators allow, for instance, the implementation of a variety of powerful and efficient program optimizations (e.g. loop optimizations or global code motion), and the construction of the SSA form [5], an intermediate representation of code that is specially tailored towards program optimization. Production compilers, like GCC or LLVM, implement the classic, efficient Lengauer-Tarjan algorithm [12] to compute dominator trees. Subsequent optimization phases can then determine whether a node dominates another node in constant time by using their respective DFS traversal numbers in the dominator tree. Specifically, the present work is part of a compiler verification effort, where an (optimizing) compiler must be formally proved to preserve the program behaviors along the compilation chain, i.e. the generated code behaves as prescribed by the semantics of the source program, if any. In this context, correctly implementing a time- and space-efficient dominator algorithm is not sufficient; one has to formally prove its correctness. We are not aware of any formal verification of the dominator problem outside of the field of compiler verification. Further, faced with this technical difficulty, existing verified compilers either ignore dominators or implement simplified and under-optimized versions of dominator algorithms.
For instance, the CompCert C compiler [14,13] is not based on any SSA form for performing code optimization, and no global optimization uses explicit dominance information. The CompCertSSA project extends the CompCert compiler with an SSA-based middle-end. The SSA generation algorithm [2] is proved by a posteriori validation of an external checker. Although we prove that the checker ensures the strictness of the generated SSA function (that is, each variable use is dominated by its definition), the checker implementation (a simple, non-iterative CFG traversal) and soundness proof do not rely on the computation of dominators. The only phase of the CompCertSSA middle-end that depends on such a computation (which we would like to be efficient) is a common sub-expression elimination (CSE) optimization based on Global Value Numbering (GVN). It discovers equivalence classes between program variables, where variables belonging to the same class are supposed to evaluate to the same value. Its implementation, presented in [7], closely follows the choices made in production compilers, and issues dominance test requests to make sure that the chosen representative of a variable class dominates the definition point of that variable. Until now, this dominance test was implemented (and proved directly) with a simple Allen and Cocke algorithm, using a Kildall workset algorithm, thus impacting the performance of our middle-end.
Another SSA-based verified compiler is Vellvm. Zhao et al. [20,18] formalize the LLVM SSA intermediate form and its generation algorithm in Coq. Their work follows closely the LLVM design, and their verified transformation can be run inside the LLVM platform itself. Zhao et al. [19] formalize in Coq a fast dominance computation based on the Cooper et al. algorithm [4], but their algorithm is, for verification purposes, a simplified version of the initial algorithm. This is a non-trivial formalization work that also proves in Coq the completeness of the dominance relation computation, an interesting and difficult problem in itself. However, this work does not focus on compilation time. Other CPS or ANF-based verified compilers for functional languages [3,6] implement simple optimizations that do not require dominance information, although their (unverified) peers, like MLton, benefit from dominators for, e.g. contification [8], for inter-procedural optimization.
Facing the conceptual complexity of the most clever variants for computing dominators, there has been a growing interest in proposing ways of checking their results. Georgiadis et al. [11] propose a linear-time checker of the dominator tree, based on the notions of headers and loop nesting forests. Georgiadis et al. [9] propose a linear-time certifying algorithm, producing a certificate (a preorder of the vertices of the dominator tree, with a so-called low-high property) that helps simplify the checking process. Although checking the low-high property on the certificate is straightforward and easily implemented in linear time, linking the low-high property back to the immediate dominance relation (via the concept of strongly independent spanning trees) remains quite involved. As a matter of fact, to date, these two recent, sophisticated algorithms are still out of the reach of mechanized developments.
In our context of verified compilation, we need two things: to compute the dominance relation efficiently, and to represent this relation in a compact way, so that the dominance test can be implemented efficiently. Note however that mechanically verifying (or validating) the dominator tree itself remains, for the time being, inessential. Hence, we believe that the technique used in GCC and LLVM, i.e. computing a dominator tree using Lengauer-Tarjan's algorithm, and then fast-checking dominance with an ancestor test in the DFS numbering of the dominator tree, provides a perhaps more modest, yet viable, trade-off between efficiency and verifiability. We argue that this technique can also be applied to verified compilers, by relying on an a posteriori validation approach. We present a formally verified validator of untrusted dominator trees, on top of which we implement and prove a fast dominance test that follows these principles.
Contributions. After recalling the technical background on dominators and the main algorithms (Section 2), we present the following contributions.
- A new, simple, and verified validator for the dominance relation (Section 3), which leads to a formally verified implementation of a dominance test technique used in production compilers. The heart of the validator algorithm is our own contribution, but it is mixed with well-known graph algorithms for fast ancestor checking. This paper presents, to our knowledge, the first verification of these kinds of techniques.
- Empirical evidence that this technique allows, in practice, a non-negligible performance gain, even in the context of verified compilers (Section 4).
- The integration of this dominance computation and dominance test within the CompCertSSA verified compiler. Our formal development and experiments are available online at http://www.irisa.fr/celtique/ext/ssa_dom/.
Technical Background and Overview of Algorithms
In this section, we recall the technical background on dominance, together with standard techniques to compute this relation. We also present how dominance computation and dominance testing are implemented in modern compilers.
Definitions
A control flow graph G = (N, E, e) is defined as an oriented graph, i.e. a set of nodes N , a set of edges E, and a distinguished entry node e ∈ N (that is not the successor of any other node).In the following, we depict an edge connecting node i ∈ N to j ∈ N by i → j.
Definition 1 (Dominance relation).
A node d dominates a node n if n is reachable from the graph entry node and if any path from the entry point to n contains d. If d ≠ n, the dominance is said to be strict.
For every node n (except the entry e), the set sdom(n) of nodes that strictly dominate n contains a node idom(n) that is dominated by every other node in sdom(n) [12]: the immediate dominator. As an important corollary, the immediate dominance relation can be represented as a tree [12].
Definition 2 (Dominator tree). The dominator tree of a CFG is a tree whose nodes are the nodes of the CFG, and where the children of a node are all the nodes that it immediately dominates. Figure 1 shows an example of a CFG and its dominator tree. For instance, in the CFG, node 10 is dominated by node 15, since all paths from the entry node 17 to node 10 must go through 15. Hence, in the dominator tree, node 10 must have node 15 as an ancestor. However, 15 is not the immediate dominator of node 10; node 13 is: indeed, in the set sdom(10) = {17, 16, 15, 14, 13}, node 13 is the one dominated by every other node in the set. Hence, node 10 is a child of node 13 in the dominator tree.

Figure 1. Left: a CFG. Center: its dominator tree (where, if i has j as a child, then i is the immediate dominator of j). Right: the dominator tree (solid arrows), annotated with a DFS traversal (dotted arrows), and its corresponding DFS intervals (see Section 2.3).
Standard Techniques for Computing Dominance
Allen-Cocke (AC) standard data-flow analysis [1]. The AC algorithm is based on the following fixpoint characterization of dominance: dom(e) = {e} for the entry node e and, for every other node n, dom(n) = {n} ∪ ⋂_{p → n} dom(p), where the intersection ranges over the CFG predecessors p of n.
Intuitively, it captures that every strict dominator of a node n must also dominate every predecessor of n in the CFG. Such a fixpoint equation can be solved using a workset fixpoint iteration à la Kildall. As is typical for forward data-flow problems, the fixpoint resolution is sped up if at each workset iteration we choose the node with the lowest rank with respect to a reverse postorder ordering on the CFG (a node is visited before any of its successors, except when the successor is reached by a back edge). A direct implementation is quadratic in the number of nodes, i.e. O(|N|²).
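A direct implementation of this fixpoint iteration is short; the Python sketch below computes full dominator sets on a toy CFG. For brevity it visits nodes in insertion order, whereas a reverse postorder traversal would converge faster, as noted above.

```python
def dominators(succ, entry):
    """Iterate dom(n) = {n} ∪ ⋂_{p→n} dom(p) to a fixpoint."""
    nodes = list(succ)
    preds = {n: [] for n in nodes}
    for n, ss in succ.items():
        for s in ss:
            preds[s].append(n)
    dom = {n: set(nodes) for n in nodes}  # top element; unreachable nodes keep it
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n == entry or not preds[n]:
                continue
            new = {n} | set.intersection(*(dom[p] for p in preds[n]))
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom

cfg = {"e": ["a", "b"], "a": ["c"], "b": ["c"], "c": []}
print(dominators(cfg, "e"))  # c is dominated only by the entry e and itself
```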
Cooper-Harvey-Kennedy (CHK) algorithm [4]. The CHK dominance computation improves on the Allen-Cocke data-flow approach using the following properties. First, dominator sets can be characterized by the immediate dominator table.
CHK can be understood as a variation of the previous approach where dominance sets are implicitly represented by the immediate dominator tree. Using a reverse postorder ordering, and noticing that idom(n) ≺_rpo n for every n, set intersection can be performed in an efficient way: since dom(a) ∩ dom(b) ≠ ∅ (both sets contain the entry), the resulting set is a common prefix of both dom(a) and dom(b) [4]. This algorithm performs better in practice than AC, but has the same O(|N|²) asymptotic worst-case time complexity.
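The prefix-intersection trick is the heart of CHK; the sketch below shows the classic two-pointer intersection over idom chains, assuming nodes are numbered in reverse postorder (the toy tree is invented for illustration).

```python
def intersect(idom, rpo, a, b):
    # Walk the two idom chains upward until they meet; rpo numbers
    # decrease toward the entry, so we always lift the deeper node.
    while a != b:
        while rpo[a] > rpo[b]:
            a = idom[a]
        while rpo[b] > rpo[a]:
            b = idom[b]
    return a  # nearest common dominator of a and b

# Toy dominator tree: node 0 is the entry and its own idom.
idom = {0: 0, 1: 0, 2: 0, 3: 1}
rpo = {0: 0, 1: 1, 2: 2, 3: 3}
print(intersect(idom, rpo, 3, 2))  # -> 0
```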
Lengauer-Tarjan (LT) algorithm [12]. Modern compilers implement dominance using the LT algorithm. It uses depth-first search and union-find data structures to achieve an asymptotic complexity of O(|N| log |N| + |E|). It relies on the subtle notion of semi-dominator, which provides a convenient intermediate step in the dominator computation. An amortized quasi-linear complexity can be obtained using path compression, but this variant does not seem to be implemented in practice.
Modern Implementation of Dominance Test in Compilers
As explained above, modern compilers such as GCC or LLVM implement dominance following the LT algorithm. Once they obtain a dominator tree (as shown in Figure 1), they pre-process it to obtain a constant-time dominance test. The dominance between two nodes d and n can be determined by testing whether the node d is an ancestor of n in the dominator tree. For instance, in Figure 1, node 15 dominates node 10 because there is an upward path from 10 to 15 in the tree. This test can be performed in constant time thanks to a linear pre-computation (on the |N| − 1 edges of the dominator tree). For each node, one computes a depth-first search interval I(n) = [d(n), f(n)], where d(n) is the discovery time of node n during the traversal (the first time n enters the DFS stack) and f(n) is the finishing time (the time when all sons of n have been processed) [17]. In a directed acyclic graph, d is an ancestor of n if and only if I(n) ⊆ I(d). Figure 1 shows, on the right, the result of such an interval computation: interval bounds are determined according to the starting and ending time clocks when depth-first traversing the tree. There, the fact that 15 dominates node 10 is obtained by observing that the interval I(10) = [21, 22] is included in I(15) = [2, 31].
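This interval test is simple to implement; the following Python sketch computes DFS intervals over a toy dominator tree loosely modeled on Figure 1 (only the 17-16-15-14-13-10 spine is taken from the text; the other children are invented) and uses interval inclusion as the dominance test.

```python
def dfs_intervals(children, root):
    # Explicit-stack DFS over the tree; each node gets (discovery, finish) clocks.
    clock, itv = 0, {}
    stack = [(root, False)]
    while stack:
        node, closing = stack.pop()
        clock += 1
        if closing:
            itv[node] = (itv[node], clock)  # finish the interval
        else:
            itv[node] = clock               # discovery time, for now
            stack.append((node, True))
            for c in children.get(node, []):
                stack.append((c, False))
    return itv

def dominates(itv, d, n):
    (d1, d2), (n1, n2) = itv[d], itv[n]
    return d1 <= n1 and n2 <= d2            # I(n) ⊆ I(d)

tree = {17: [16], 16: [15], 15: [14, 12, 11], 14: [13], 13: [10]}
itv = dfs_intervals(tree, 17)
print(dominates(itv, 15, 10), dominates(itv, 12, 10))  # True False
```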
As a result of this pre-computation, of complexity O(|N| log |N| + |E|), a constant-time dominance test is obtained by storing the interval information in adequate data structures.
Validator and Proof of Dominance Test
Our formalization is done on top of an abstract notion of CFG. Such a graph is defined by an entry node and a set of edges.
In the sequel, reached : node→Prop is a predicate characterizing the set of nodes that are reachable, via cfg, from the node entry, and dom : node→node→Prop denotes the dominance relation that is defined using a standard definition of CFG paths.
In this section, we assume that an external tool computes a list dt_edges that contains the reversed edges of the candidate dominator tree (i.e. the pair (i,j) represents that, in the candidate dominator tree, i is a child of j, or that j immediately dominates i).
We then validate this list and build a dominance test, implemented by the function test_dom : node→node→bool that satisfies the following theorem: forall i j, reached j → test_dom i j = true → dom i j.
In the rest of this section, we proceed in three steps. First, we give the dominator map D : node→node (extracted from dt_edges) a specification that entails dominance. Then, we provide an efficient procedure to test whether a node is a descendant of another in the dominator tree (implicitly encoded in D). This procedure is used twice: for checking that D meets its specification, and in the final implementation of the dominance test, test_dom.
Validation of Dominator Tree
In this section, we assume a dominator map D : node→node that provides an (immediate) dominator candidate for each node. We will explain in Section 3.4 how we build D from the list dt_edges. We provide a formal specification for D and prove it entails dominance. Note that we do not prove that it implies immediate domination, as this is not required in our final soundness theorem.
The specification, inspired by Equations (1) and (2), is defined as follows.
We then prove, quite straightforwardly, that D_spec implies dominance by induction on the definition of predicate reached.
Hence, we can validate the map D if we manage to check that it satisfies the specification D_spec. Interestingly, we need an executable version of the Dstar relation for two distinct usages. First, we want to validate D_spec on D. Second, we want to implement a dominance test using Dstar.
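In pseudocode terms, one plausible reading of this validation step is sketched below; this is an illustration of the idea, not a transcription of the Coq development, and is_ancestor stands for an executable Dstar test (implemented with DFS intervals in Section 3.2 rather than by chain walking).

```python
def validate_dominator_tree(D, edges, entry, reached, is_ancestor):
    # D: candidate immediate-dominator map; edges: CFG edges (p, n);
    # is_ancestor(d, m): True iff d = m or d is an ancestor of m in the
    # candidate tree (the executable Dstar relation).
    if D(entry) != entry:
        return False
    return all(not reached(n) or n == entry or is_ancestor(D(n), p)
               for p, n in edges)

# Toy CFG e -> a -> b plus a shortcut e -> b; D(b) = e is the only valid choice.
edges = [("e", "a"), ("a", "b"), ("e", "b")]
D = {"e": "e", "a": "e", "b": "e"}
anc = lambda d, m: d == m or (m != "e" and anc(d, D[m]))  # chain-walking Dstar
print(validate_dominator_tree(D.get, edges, "e", lambda n: True, anc))  # True
```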
Ancestor Test in the Dominator Tree
In this section we assume an acyclic oriented graph, defined by an entry node and a map, sons, from nodes to the list of their successors. We will later relate this graph with our dominator tree.
As outlined in Section 2.3, the ancestor test consists in performing a depth-first traversal of the graph, starting from entry, and using a traversal clock that increases each time a node is encountered (by visiting it or by marking it). We compute, for each node n, an interval I(n) = [d(n), f(n)], where d(n) is the value of the clock when node n is first encountered, and f(n) is the value of the clock when all successors of n have been processed. If the graph is acyclic and each node is reachable from entry, these intervals yield efficient ancestor tests [17]: there exists a path from n to m in the graph if and only if I(m) ⊆ I(n). For our purpose, we only need to prove that this condition is sufficient. We define intervals, interval inclusion, and our efficient ancestor test on an interval map as follows: Record itv := { pre: Z; post: Z }. Now, to state the correctness of our interval computation, we specify a notion of ancestor called InSubTree: a node r is an ancestor of n (equivalently, n belongs to the subtree whose root is r) if n = r or there exists a successor s of r such that n belongs to the subtree rooted at s.
The interval map is computed by the function build_itv that performs the recursive DFS traversal of the graph, accumulating in a record of type state, the current interval map, and the current time clock.Note that, to ensure termination of build_itv_rec, we use a fuel auxiliary argument, i.e. a natural number counter decreasing at each recursive call.The fuel argument is useful not only to avoid proving termination, but also, and more crucially, to get a useful induction principle on the next inductive predicate.The correctness theorem of build_itv states that, in the resulting interval map st.itvm, interval inclusion implies an ancestor relationship in the tree.
As can be seen, this theorem is proved under the hypothesis that the graph is well-formed, namely that it does not contain duplicates or crossing edges, as expressed by the predicate NoRepetTreeN, whose formal definition is the following. The definition of NoRepetTreeN is staged, i.e. indexed by a natural number. This level in the definition (which coincides with the height of the tree under consideration) provides a nice induction principle when combined with the fuel argument of the function build_itv. Without such a trick, Coq does not generate a useful induction principle.
We prove build_itv_rec correctness using several auxiliary invariants, notably that the clocks are monotonic, that computed intervals are never empty, and that in a given subtree, computed intervals are included in the interval of the root of the subtree.
Well-formed Graph Construction
This section explains how we relate the list dt_edges, which contains the edges of the dominator tree, with the immediate dominator map D we use in Section 3.1 and the graph representation used in Section 3.2. We not only build a map of successors, but also check sufficient conditions enforcing the NoRepetTreeN property presented previously.
Starting from the list dt_edges, we straightforwardly build a map D from nodes to their immediate dominator candidate with the function make_D_fun of type make_D_fun (dt_edges: list (node*node)) : node → node. If a node is not in the association list dt_edges, its (correct) immediate dominator is set to itself.
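In Python terms, and as a hypothetical illustration only (the real function folds over a Coq PTree), make_D_fun amounts to turning the association list into a finite map whose default is the identity:

    def make_D_fun(dt_edges):
        # dt_edges contains pairs (n, d) where d is the immediate dominator
        # candidate of n; nodes absent from the list are mapped to themselves.
        table = dict(dt_edges)
        return lambda n: table.get(n, n)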
In a similar way to the construction of the candidate dominator map from dt_edges, we also define the function build_succs, of type build_succs (dt_edges: list (node * node)) : option (PTree.t (list node)), which performs a reverse topological sort to build a map associating to each node the (candidate) list of immediately dominated nodes. Function build_succs essentially builds the inverse of the map D. Its correctness theorem states that the output successor tree, if any, is well-formed.
This theorem follows from the checks performed during the computation of build_succs. Indeed, in its signature, the option type of the result represents a possible validation failure. During the traversal of dt_edges, we check that it contains no edge of the form (n,n) and that, when processing an edge (n,d), i.e. adding n to the list of successors of node d, node d has already been seen (i.e. is already a key in the tree) and node n has not yet been seen. Hence, to be accepted by the validator, the provided list dt_edges must be topologically sorted, and the same checks ensure that there is no loop in the graph. For further details, we refer the reader to the formal development available online.
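The checks just described can be approximated by the following Python sketch; in particular, the way the entry node is initially marked as seen is an assumption of this sketch, not something stated in the text:

    def build_succs(entry, dt_edges):
        # Returns the successor map, or None on validation failure
        # (mirroring the option type of the Coq function).
        succs = {entry: []}              # assume the entry is seen up front
        for n, d in dt_edges:
            if n == d:                   # reject self edges (n, n)
                return None
            if d not in succs:           # parent must already be a key:
                return None              # this forces a topological order
            if n in succs:               # each node introduced at most once:
                return None              # no duplicates, hence no cycles
            succs[d].append(n)
            succs[n] = []
        return succs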
Final Construction
The final dominance test computation is given in Figure 2. It takes as input a program represented by its CFG (more precisely, any function of this program) and combines the various functions presented earlier. It is proved correct with the following theorem.
We now discuss its asymptotic complexity. If N denotes the number of nodes in the CFG and E the number of edges, then the asymptotic complexity of this computation is as follows.
-The list dt_edges has length N − 1 (every node, except the entry, has a unique immediate dominator).
-The map make_D_fun dt_edges is computed with one traversal of dt_edges, performing one map update at each step. The overall complexity is O(N log N).
-build_succs is computed with one traversal of dt_edges, performing several set and map updates at each step. The overall complexity is O(N log N).
-Intervals are built with a traversal of a graph with N nodes and N − 1 edges (this is a tree). At each step, some map updates are performed. The overall complexity is O(N log N).
-One ancestor test requires two map lookups and some integer comparisons. Each integer is between 0 and 2N − 1. The overall complexity of an ancestor test is O(log N).
-Dominance tree validation requires, for each edge in the CFG, one ancestor test and some map lookups. The overall complexity of this step is O(E log N).
Overall, the dominance tree pre-computation has an asymptotic complexity of O(N log N) + O(E log N) = O(E log N) (N ≤ E, as all nodes are reachable from the entry), and the generated dominance test requires O(log N) computations.
As will be explained in the next section, we also provide a native version of the implementation that uses native integers for graph nodes and interval bounds. It does not improve the asymptotic time complexity of the whole dominance test construction, but it enables a constant-time dominance test, since an interval lookup is as fast as an array access and an interval inclusion test requires four comparisons between native integers.
Conclusion and Perspectives
We have described a new verified validator for the dominance relation. It is able to validate the state-of-the-art dominance construction by Lengauer and Tarjan, combined with an ancestor test in the candidate dominator tree. This technique, borrowed from (un-verified) production compilers like GCC and LLVM, brings an important speedup compared to previous verified algorithms [19,2]. Using native data-structures after extraction, it builds a constant-time dominance test similar, in terms of efficiency, to the non-verified test.
In terms of program optimization, this dominance test already provides strong support (i.e. we are able to perform efficient dominance tests on the CFG on demand), and we already leverage this tool in our GVN-based CSE. This important building block could help us implement other powerful optimizations, such as loop-invariant code motion. However, the most efficient implementations of natural loop detection rely on iteration strategies over the dominator tree itself. In that case, dominance checking is no longer sufficient, and one may have to investigate the mechanized verification of certifying algorithms for the dominator tree, such as the linear-time certifying algorithm by Georgiadis et al. [9].
Figure 1. Left: example CFG, with entry point 17, and nodes ordered in reverse postorder. Center: its dominator tree (where, if i has j as a child, then i is the immediate dominator of j). Right: dominator tree (solid arrows), annotated with a DFS traversal (dotted arrows), and its corresponding DFS intervals (see Section 2.3).
Figure 4. Total building (top) and using (bottom) time overheads, relative to the I-DT-NATIVE implementation, classified by programs.
"Computer Science"
] |
OQPSK Synchronization Parameter Estimation Based on Burst Signal Detection
The fast estimation of synchronization parameters plays an extremely important role in the demodulation of burst signals. In order to solve the problem of high computational complexity in the implementation of traditional algorithms, a synchronization parameter (frequency offset, phase offset, and timing error) estimation algorithm based on Offset Quadrature Phase Shift Keying (OQPSK) burst signal detection is proposed in this article. We first use the Data-Aided (DA) method to detect where the burst signal begins by taking the segment correlation between the receiving signals and the known local Unique Word (UW). In the sequel, the above results are adopted directly to estimate the synchronization parameters, which is obviously different from the conventional algorithms. In this way, the complexity of the proposed algorithm is greatly reduced, and it is more convenient for hardware implementation. The simulation results show that the proposed algorithm has high accuracy and can track the Modified Cramer–Rao Bound (MCRB) closely.
Introduction
The importance of satellite communication is not only reflected in ordinary communication transmission; it also has a profound impact on production safety and economic development. With the commercialization of Fifth-Generation (5G) mobile networks in recent years, the integration of satellite communication and terrestrial 5G has become a new hot topic in the industry. Therefore, satellite communication has ushered in a new round of development worldwide, and its technology has also received much attention [1].
As a common communication system in satellites, Time Division Multiple Access (TDMA) shares the transmission medium or network, which is conducive to the transmission of burst data. TDMA works by allowing multiple users to use the same frequency in different time slots, where each user occupies one particular slot. Therefore, TDMA has the advantages of high signal quality, good security, and large system capacity [2].
Offset Quadrature Phase Shift Keying (OQPSK) is a modulation method widely used in satellite communication because of its constant envelope characteristics, good spectral efficiency, and power efficiency. The OQPSK signal is an improved version of the Quadrature Phase Shift Keying (QPSK) signal, obtained by staggering the code streams of the in-phase and quadrature components of the QPSK signal by half a symbol period in time, such that its phase can jump by at most 90 degrees.
In satellite communications, due to the burstiness and shortness of the burst signal and the huge influence of synchronization parameters on the performance of signal demodulation [3], it is very important to quickly and accurately capture the starting position of the burst signal and estimate its synchronization parameters. In addition, because of the limitation of hardware resources in practical application, the complexity of the algorithm is required to be as low as possible to reduce resource consumption. On the other hand, the accuracy of the algorithm should be guaranteed to meet the actual needs.
Related Work
The improved energy method given in [4] achieves a high detection rate, but it uses the center of the energy window as the starting point of the burst signal, so there is a certain deviation from the true starting point. Cross-correlation was used in the earliest burst signal detection based on the Data-Aided (DA) method; however, its performance is poor in the presence of frequency offset [5]. The double correlation [6] can effectively eliminate the influence brought by frequency offset. Unfortunately, it contains many conjugate multiplications, resulting in high computational complexity. In [7], a segment correlation algorithm was introduced to greatly reduce the complexity at the expense of part of the anti-frequency-offset ability. The burst signal detection adopted in this article is further simplified on the basis of segment correlation.
Once the burst signal detection is done, we need to estimate the synchronization parameters. Synchronization includes carrier synchronization and timing synchronization, where carrier synchronization comprises frequency offset synchronization and phase offset synchronization. We should note that, for burst signals, synchronization parameter estimation often adopts a feedforward algorithm, so methods based on the phase-locked loop [8] are not applicable here. A code-assisted carrier synchronization algorithm was proposed in [9], but this method requires the use of channel coding, and its iterative nature makes its implementation more complicated. In [10], a low-complexity carrier synchronization method was designed under the assumption of a special frame structure. The L&W, KAY, L&R, and Fitz [11][12][13][14] algorithms are commonly used for feedforward estimation of the frequency offset. However, the accuracy and estimation range of these algorithms cannot be well coordinated. More specifically, L&W and KAY have a wide estimation range but low accuracy, while L&R and Fitz have high accuracy but a narrow estimation range, and their complexities increase with the improvement of the estimation accuracy. The phase offset estimation is usually performed after the frequency offset is estimated: the received signal is first compensated for the frequency offset based on the above results, and then the Maximum-Likelihood (ML) principle is applied to estimate the phase offset.
For timing error estimation, the commonly used feedforward algorithms are the Lee, AVN, and Zhu [15][16][17] algorithms; nevertheless, their performance relies heavily on the roll-off factor, and they cannot be directly applied to OQPSK signals. In [18] (in the simulations, we call this algorithm Ye), a DA-based feedforward timing error estimation algorithm using trigonometric interpolation was proposed, but this algorithm has no anti-frequency-offset capability.
Our Contribution
In this paper, we propose an OQPSK synchronization parameter estimation algorithm based on burst signal detection in satellite communications. The main contributions of this paper are the following: (1) We exploit a simplified segment correlation algorithm in burst signal detection, where the sequential results of the segment correlation are then efficiently reused to estimate the synchronization parameters. We should note that, in traditional algorithms, there is no connection between burst signal detection and synchronization parameter estimation. To this degree, the proposed algorithm has significantly lower complexity.
(2) For frequency offset estimation, the proposed algorithm can achieve high estimation accuracy and is relatively robust to the estimation range.
(3) For timing error estimation, the proposed algorithm is almost not affected by the roll-off factor, and it also has the ability of anti-frequency offset.
(4) The simulation results show that the algorithm has high accuracy, and its performance is close to the Modified Cramer-Rao Bound (MCRB).
The rest of this article is arranged as follows: Section 2 is the algorithm description; Section 3 is the performance evaluation; and Section 4 is the conclusions.
Algorithm Description
The flowchart of the proposed algorithm is shown in Figure 1. We first detect the burst signal to find the frame header of the received signal. In the sequel, we exploit the correlation values obtained above to estimate the frequency offset, which is then compensated to recover the true carrier frequency, closely followed by the phase offset estimation part. Once again, we apply the interpolation method to the correlation peak found in the frame header detection to obtain the timing error.
Burst Signal Detection
Without considering the timing error, the received signal model can be represented as in Equation (1), where e^{jθ} is the M-ary phase-modulated symbol, n(k) denotes Additive White Gaussian Noise (AWGN), T stands for the symbol period, and f_d and θ_0 represent the frequency offset and phase offset, respectively. Applying double correlation between r(k) in Equation (1) and the known local Unique Word (UW) c(k) yields Equation (2), where υ is the moment of judgment. In Equation (2), ∑_{k=i}^{N−1} r(υ+k)c*(k) r*(υ+k−i)c(k−i) is the double correlation term, and ∑_{k=υ+i}^{υ+N−1} |r(k)||r(k−i)| is the random data correction term. Usually, the random data correction term can be omitted [6], such that Equation (2) can be approximated as Equation (3). The reason we chose double correlation in Equations (2) and (3) is that cross-correlation is dysfunctional when frequency offset exists. Unfortunately, this brings unacceptable complexity and makes hardware implementation impossible. In order to reduce the computational complexity, the double correlation method can be improved by the concept of segments [7], as in Equations (4) and (5), where N and L are the lengths of the local UW and of each segment of the UW, respectively. The segment correlation takes the conjugate multiplication between different segments to eliminate the influence of frequency offset. It can be seen from Equations (4) and (5) that the complexity of L_2(υ) is greatly reduced compared to that of L_1(υ).
Normally, the received signal should be up-sampled, but when the local UW is correlated with the received signal, we can choose whether the local UW is up-sampled or not. In the simulation, we find that using local UW without up-sampling in QPSK modulation yields acceptable performance, but this does not work for OQPSK modulation. This is because OQPSK staggers the in-phase component and quadrature component by half a symbol period, and the local UW will lose the information of the half symbol if the local UW is not up-sampled. Obviously, the solution to that is to up-sample the local UW and stagger its in-phase component and quadrature component by half a symbol period.
A convenient approximation of Equation (5) is to consider only the case j = 1, because the correlation in the segment correlation algorithm is mainly dominated by adjacent segments (the j = 1 case). Therefore, Equation (5) can be written as Equation (6). Undoubtedly, L_3(υ) has lower complexity, but it results in extra performance loss. Fortunately, our simulation shows that this loss is acceptable. In more detail, the complexity comparison between L_1(υ), L_2(υ), and L_3(υ) is given in Table 1; a code sketch of the simplified statistic follows.
Table 1. Complexity comparison of L_1(υ), L_2(υ), and L_3(υ) in terms of multiplications and additions.
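To make the statistic concrete, here is a minimal NumPy sketch of the per-segment correlations and of the simplified statistic L_3(υ) of Equation (6). It assumes r and the up-sampled local UW c are complex baseband sample arrays with N = len(c) = M·L, and it does not reproduce the exact normalization of the paper's equations:

    import numpy as np

    def segment_corr(r, c, v, M, L):
        # Per-segment correlations x_i between r (from offset v) and the UW c.
        x = np.empty(M, dtype=complex)
        for i in range(M):
            x[i] = np.sum(r[v + i*L : v + (i+1)*L] * np.conj(c[i*L : (i+1)*L]))
        return x

    def L3(r, c, v, M, L):
        # Simplified segment correlation: only adjacent segments (j = 1).
        x = segment_corr(r, c, v, M, L)
        return np.abs(np.sum(x[1:] * np.conj(x[:-1])))

    def detect(r, c, M, L):
        # Burst detection: the frame header estimate maximizes L3 over v.
        N = M * L
        return max(range(len(r) - N + 1), key=lambda v: L3(r, c, v, M, L))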
Synchronization Parameter Estimation
Once the frame header of the burst signal is detected, the following work will be the synchronization parameter (frequency offset, phase offset, and timing error) estimation.
Frequency Offset Estimation
We start our derivation of the frequency offset estimate from Equation (4), and specifically from the term given in Equation (7). In order to facilitate the derivation, we temporarily ignore the influence of the noise term n(k) and define f = 2πf_dT, so that Equation (7) can be written as Equation (8). As we mentioned earlier, the value of υ indicates the judgment time, which only represents the starting position of the burst signal and does not affect the parameter estimation. Thus, we can set the value of υ to zero. Then, incorporating Equation (8) into Equation (4) gives Equation (9). In order to obtain L_3(υ) in Equation (6), we first simplify x_i x*_{i−1}, as in Equations (10) and (11), where A is defined in Equation (12). Equation (12) implies that the value of A is a real number, and accordingly, it will not affect the result of the complex angle of Equation (11). Obviously, the result in Equation (13) can be obtained. The result of Equation (13) holds in the ideal case, that is, when no noise is considered. Unfortunately, noise has a non-negligible effect on the frequency offset estimation. In order to improve the performance of the estimation, we accumulate the values of x_i x*_{i−1} to smooth the effect of noise. We should also note that, in order to facilitate the derivation, we defined f = 2πf_dT above. Consequently, the final frequency offset can be calculated as in Equation (14). Equation (14) indicates that the frequency offset estimation range is inversely proportional to L, and its estimation accuracy is directly proportional to it. It is particularly important to note that the accumulated sum ∑ x_i x*_{i−1} in Equation (14) equals Equation (6) after removing the modulus. Therefore, the burst signal detection and parameter estimation algorithm are connected here: we can directly use the results of burst signal detection to estimate the frequency offset, which is why the complexity of the algorithm in this article is greatly reduced. Table 2 compares the complexity of the frequency offset estimation algorithms; N_0 in L&R and Fitz is usually N/2.
Table 2. Complexity of the frequency offset estimation algorithms in terms of multiplications, additions, and complex-angle operations; the algorithm in this article requires 1, 0, and 1, respectively.
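Reusing the per-segment correlations x_i already computed during burst detection, Equation (14) then costs only one complex accumulation and one angle. In the hedged sketch below, Ts denotes the sampling interval, an assumption of this sketch:

    import numpy as np

    def estimate_freq_offset(x, L, Ts):
        # Accumulate adjacent-segment products (the same sum as in L3, with
        # the modulus removed) and read the frequency offset off its angle.
        acc = np.sum(x[1:] * np.conj(x[:-1]))
        return np.angle(acc) / (2 * np.pi * L * Ts)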
Phase Offset Estimation
We perform phase offset estimation after frequency offset estimation is completed. The first step of phase offset estimation is to compensate for the frequency offset in the frame header, as in Equation (15), where y(k) is the frame header after removing the frequency offset and n'(k) = n(k) exp(−j2πf_dkT) is the noise term. According to the ML principle, the DA phase offset estimation algorithm can be obtained as in Equation (16) [19].
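The sketch below shows a standard form of the ML data-aided phase estimate; since Equation (16) is not reproduced in the text, the exact expression should be treated as an assumption: compensate the estimated frequency offset as in Equation (15), then take the complex angle of the correlation with the local UW.

    import numpy as np

    def estimate_phase_offset(r_header, c, f_hat, Ts):
        # r_header and the local UW c are assumed to have the same length.
        k = np.arange(len(r_header))
        y = r_header * np.exp(-1j * 2 * np.pi * f_hat * k * Ts)  # Equation (15)
        return np.angle(np.sum(y * np.conj(c)))                  # ML DA estimate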
Timing Error Estimation
After the burst signal detection is completed, in addition to estimating the above carrier synchronization parameters, it is also necessary to estimate the timing error. The signal model with timing error is given in Equation (17), where {a(l)} represents the independent and identically distributed information sequence, and g(t) and τ denote the baseband shaping function and timing error, respectively. When the received signal is sampled at t = kT + τ, the sampled data r(kT + τ) are obtained. The ML-based data-aided timing error estimation formula given in [20] is Equation (18). With the square removed, Equation (18) is exactly the burst signal detection based on cross-correlation. The process of capturing the frame header is the process of searching for correlation peaks: when Equation (18) reaches its maximum, the local UW is completely aligned with the UW in the received signal. As mentioned above, the cross-correlation algorithm has the fatal disadvantage of no anti-frequency-offset ability. Fortunately, the segment correlation used in this article also captures the frame header, and its maximum position is the same as that of Equation (18), that is, where the local UW is completely aligned with the UW in the received signal. Therefore, the cross-correlation peak in Equation (19) can be replaced by the segment correlation peak in Equation (6), and the timing error estimation algorithm thereby gains anti-frequency-offset ability. Thus, burst signal detection and timing error estimation are linked via Equation (20). The interpolation method is usually adopted to locate the maximum of Equation (20); here, we chose a simple and highly accurate trigonometric interpolation method [21]. It should be noted that the number of points used must be even. In this article, four points were employed for interpolation, and each symbol took two points; thus, a total of two symbols were required. The four-point interpolation formula given in [21] is Equation (21), with the auxiliary quantities defined in Equation (22). To find the timing error, we only need to find the value of µ that maximizes z(µ): μ̂ = arg max_µ [Re(C_0 + 2C_1 e^{jπµ/2} + C_2 e^{jπµ})]² (23). Therefore, finding the timing error is transformed into the problem of finding the maximum of Re(C_0 + 2C_1 e^{jπµ/2} + C_2 e^{jπµ}), as written in Equation (24). For the convenience of calculation, we define C_1 = A + Bj, where A = z(0) − z(2) and B = −z(1) + z(−1). According to the Euler formula, e^{jπµ/2} = cos(πµ/2) + j sin(πµ/2) and e^{jπµ} = cos(πµ) + j sin(πµ); thus, Equation (24) can be rewritten as Equation (25). Taking the derivative gives Equation (26); setting it to zero and solving, we obtain Equation (27). According to Equation (27), the corresponding timing error can be calculated as in Equation (28), where P is the number of sampling points used for each symbol during interpolation (P = 2 in this article). The value of τ cannot be calculated directly from Equation (28), because the equation is already in its simplest form; thus, we have to resort to approximate processing. The four points adopted in the interpolation are approximately symmetric: the middle values z(0) and z(1) are larger, and the values z(−1) and z(2) on both sides are smaller. Therefore, A is larger and C_2 smaller, and the term 2C_2 A sin(πτ) can be ignored, yielding Equation (29). Although we obtain Equation (29) by ignoring 2C_2 A sin(πτ), simulation indicates that it still reaches high accuracy.
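Because the closed-form approximation (29) is not reproduced above, the sketch below substitutes a generic four-point trigonometric peak interpolation, plainly a different route to the same quantity: it fits the fundamental (period-4) tone of the four samples z(−1), z(0), z(1), z(2) taken around the detected correlation peak and reads the fractional peak position off its phase, assuming P = 2 samples per symbol.

    import numpy as np

    def estimate_timing_error(z, P=2, T=1.0):
        # z: the four correlation samples [z(-1), z(0), z(1), z(2)].
        z = np.asarray(z, dtype=complex)
        m = np.arange(-1, 3)
        Z1 = np.sum(z * np.exp(-1j * np.pi * m / 2))  # fundamental (period-4) tone
        mu = -2.0 / np.pi * np.angle(Z1)              # fractional offset, in samples
        return mu * T / P                             # timing error, in symbol units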
Performance Simulation of Burst Signal Detection
Tables 3 and 4 are the comparisons of the false alarm rate and detection rate of OQPSK burst signal detection, with the local UW up-sampled four times versus not up-sampled, in the case of Equation (5). Simulation conditions: in the up-sampled case, the added normalized frequency offset is 0.04 (relative to the symbol rate, and the same below); the total length of the UW after being up-sampled four times is N = 108; the number of segments is M = 9; and the local UW length of each segment is L = N/M = 12. In the non-up-sampled case, the added normalized frequency offset is 0.04, N = 27, M = 9, and L = N/M = 3. We performed 10,000 simulations. From Tables 3 and 4, it can be seen that the performance with the local UW up-sampled four times is much better than without up-sampling. This is because there are more points after up-sampling, so the value of the correlation result is larger; but the more important reason is that the OQPSK signal can stagger its in-phase and quadrature components by half a symbol period only after up-sampling; otherwise, part of the correlation is lost.
Tables 5 and 6 are the comparisons of the false alarm rate and detection rate, respectively, of the segment correlation algorithm before and after the simplification. Simulation conditions: the added normalized frequency offset is 0.04, N = 108, M = 9, and L = 12. We performed 10,000 simulations. Tables 5 and 6 indicate that the performance of the simplified segment correlation algorithm is slightly lower than that before the simplification, but the decrease is not significant. This is because the correlation of the segment correlation algorithm is mainly concentrated between adjacent segments: when j = 1, the correlation is maximal, and the correlation decreases as j increases. Therefore, this paper adopts the simplified algorithm, i.e. the case where j takes only the value one, which trades a small performance loss for a further reduction in the amount of calculation.
Table 5. Comparison of the false alarm rate between L_2(υ) and L_3(υ).
To sum up, for the burst signal detection algorithm, on the one hand, due to the characteristics of the OQPSK signal, the local UW must be up-sampled to ensure its detection performance. On the other hand, in order to reduce the complexity of the algorithm, we further simplify the segment correlation algorithm. Of course, the performance of the algorithm decreases after simplification, but fortunately, the decrease is not obvious.
Performance Simulation of Frequency Offset Estimation
Figure 2 shows the Mean Squared Error (MSE) comparison of different frequency offset estimation algorithms. Simulation conditions: the added normalized frequency offset is 0.02; N = 108; and the segment length L of the algorithm in this article is 18, 12, 9, and 6, respectively. The figure implies that the L&R and Fitz algorithms have the highest estimation accuracy, and their performance is very close to the MCRB, while the L&W algorithm has the worst performance. The Kay algorithm only gradually improves at high SNR. The performance of the algorithm in this article is proportional to the length of L. We can see that when L = 18, the accuracy of the algorithm in this article is very close to that of the L&R and Fitz algorithms and is far better than the L&W algorithm. In addition, when the SNR is lower than 12 dB, its performance is also better than the KAY algorithm.
Figure 3 shows the comparison of the estimation range of the algorithm in this article and several frequency offset algorithms with high accuracy. Simulation conditions: the added normalized frequency offset is [−0.4:0.4] with a step size of 0.001; N = 108; and the segment length L of the algorithm in this article is 18, 12, 9, and 6, respectively. It can be seen from the figure that, under the current simulation conditions, the estimation ranges of Fitz and L&R are (−0.035:0.035) and (−0.07:0.07), respectively. In practical applications, the range of the estimated frequency offset should be at least (−0.1:0.1); thus, the L&R and Fitz algorithms become invalid in the presence of large frequency offsets, while the estimation range of the algorithm in this article can still reach (−0.11:0.11) even in the case of the highest accuracy (L = 18). In the following simulations, we select L = 12, since the L = 12 case is more robust to frequency offset (−0.16:0.16), and there is not much difference in MSE performance between L = 12 and L = 18.
In short, as long as an appropriate L is selected, the frequency offset estimation algorithm in this paper not only has an accuracy close to the L&R and Fitz algorithms, but also a much larger estimation range. In addition, the frequency offset estimation algorithm is calculated from the correlation results generated during the burst signal detection process, so its complexity drops significantly.
Performance Simulation of Phase Offset Estimation
Figure 4 shows the simulation of phase offset estimation accuracy. Simulation conditions: N = 108; the added phase offsets are −3π/4, −π/2, −π/4, 0, π/4, π/2, and 3π/4. From the figure, we can see that the MSE of the algorithm closely tracks the MCRB, so its performance is excellent. Figure 5 shows the effect of different UW lengths on the phase offset estimation performance. Simulation conditions: the added phase offset is π/4, and the UW lengths used are N (N = 108), 3N/4, N/2, and N/4. From the figure, we can see that the performance of the algorithm is proportional to the local UW length used.
In summary, the ML-based phase offset estimation algorithm has good estimation performance, and its performance is directly proportional to the UW length. In actual projects, as long as the performance can meet the needs, the length of the local UW used can be appropriately reduced. For example, when the length of the local UW used in this article takes N/2, its accuracy still meets the requirements, and its complexity is reduced by half.
Figure 5. Accuracy of the phase offset estimation algorithm under different UW lengths.
Performance Simulation of Timing Error Estimation
Figure 6 shows the estimation accuracy of OQPSK signals under different timing estimation algorithms. Simulation conditions: the roll-off factor is set to 0.35; N = 108; L = 12; and the frequency offset is zero. It can be seen from the figure that the Lee, Wang, and AVN [15][16][17] algorithms are completely invalid at this time, and only the proposed algorithm and the Ye algorithm [18] can be used. We note that, to make the system suitable for the proposed algorithm and the Ye algorithm, we especially emphasized the up-sampling of the local UW and the staggering of the in-phase and quadrature components by half a symbol period in Section 2.1. Since Lee, Wang, and AVN do not work for OQPSK, in order to carry out a fair comparison, we also test QPSK signals in the following. Figure 7 shows the accuracy of the timing error estimation algorithms under different roll-off factors. Simulation conditions: the QPSK signal is adopted; N = 108; L = 12; the frequency offset is zero. It can be seen from the figure that the performance of the Lee, Wang, and AVN algorithms is greatly affected by the roll-off factor, while the proposed algorithm and the Ye algorithm are not sensitive to it. In addition, we can also see that the MSE performance of the proposed algorithm is slightly better than that of Ye and much better than those of Lee, Wang, and AVN. If the roll-off factor is set to 0.35, a normalized frequency offset of 0.05 is added, and the other conditions remain the same as those in Figure 7, then we obtain the simulation in Figure 8. Figure 8 implies that in the presence of frequency offset, the Ye algorithm fails, while the other algorithms are not affected.
From the above comparison, we can draw a conclusion: firstly, the performance of the traditional Lee, Wang, and AVN algorithms is not only greatly affected by the roll-off factor, but these algorithms also cannot be used on OQPSK signals; secondly, although Ye solved these problems, it lost the ability of anti-frequency offset.
As we mentioned in Section 3.2.1, the anti-frequency-offset ability in practical applications must cover at least the range (−0.1:0.1). Figure 9 shows the anti-frequency-offset ability of the algorithm in this article when it is used on OQPSK signals. Simulation conditions: N = 108; M = 9; L = 12; the roll-off factor is 0.35; the added normalized frequency offsets are [−0.1:0.1] in steps of 0.025. It can be seen from the figure that, as the frequency offset becomes larger, the estimation accuracy slightly decreases, but overall the difference is not large. Therefore, the timing algorithm in this article has good anti-frequency-offset ability, and its range can fully meet practical needs. All in all, the timing estimation algorithm in this article not only has high accuracy, but its performance is also unaffected by the roll-off factor, and it has good anti-frequency-offset ability. In addition, the values used in the interpolation of the timing estimation algorithm are generated during burst signal detection; according to Equation (29), the result can be achieved with only a very simple calculation.
Conclusions
In order to reduce the complexity of algorithm implementation, we propose an OQPSK synchronization parameter estimation algorithm based on burst signal detection. First of all, the burst signal detection is performed by adopting a simplified segment correlation algorithm, which greatly reduces the amount of calculation with a very small performance loss. After completing the burst signal detection, on the one hand, we use the results generated during the burst signal detection to estimate the frequency offset and then estimate the phase offset. On the other hand, the results of burst signal detection are also used to estimate the timing error. The simulation results show that the algorithm in this article not only greatly reduces the amount of calculation, but also retains good performance. More precisely, firstly, the estimation accuracy of the frequency offset estimation algorithm in this article is close to that of the L&R and Fitz algorithms, with a much wider estimation range; secondly, the timing estimation algorithm keeps high accuracy with excellent anti-frequency-offset ability and is less sensitive to changes of the roll-off factor.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
"Computer Science"
] |
Teachers’ Continuous Professional Development through Lesson Study: Voices of Vietnamese EFL Teachers at University
This study investigated how teachers perceived the benefits and difficulties of implementing Lesson Study (LS) in the context of English language teaching at a university in Vietnam. The data were collected from six teachers through semi-structured interviews. The analysis focused on two themes: teachers' perceived benefits and teachers' perceived difficulties while implementing LS. The results generally showed that the teachers believed that LS brought them three benefits: encouraging teachers' exchange activities and helping them deepen knowledge of lesson content, understanding students' learning activities and working processes, and promoting teachers' motivation in pursuing their continuous professional development (CPD). Regarding teachers' perceived difficulties in LS implementation, they reported lacking confidence in actively implementing LS activities. Additionally, they had difficulties in time management and encountered problems reaching a consensus for joint work due to the influence of multi-faceted aspects. Accordingly, the study provides pedagogical implications for related stakeholders (teachers, students, and policymakers) regarding LS implementation issues, contributing to the success of LS implementation in Vietnam and other similar contexts.
Teachers' continuous professional development (CPD) can be defined as a lifelong learning process that starts from the very beginning of teachers' teaching careers and continues until they stop or retire (Villegas-Reimers, 2003). CPD has become an influential norm in educational reforms, and it helps teachers to facilitate learning activities through which they can learn to adapt to new changing roles and teaching approaches (Mon et al., 2016).
Numerous empirical studies have looked into the relationship between teachers' CPD and their teaching practices, finding that CPD plays a significant role in guiding teachers' profession (Alamri et al., 2018). Teachers' CPD is crucial in shaping teachers' beliefs and needs, which can change their practices to fit a particular context (Tran & Pasura, 2021). Hwang (2021) states that teachers' CPD not only helps them learn modern tools and instructional approaches but also shows them how to apply these in their practical, classroom-based teaching. Similarly, Mizell (2010) argued that CPD activities that assist teachers in deciding instructional strategies would best support students' learning.
It is not easy to successfully implement CPD programs for teachers. Guskey (2002) and Forrest (2018) stated that most forms of CPD for teachers are seen as top-down and too isolated from classroom-based practices. Research shows that most activities for teachers' CPD are perceived as less relevant or even unconnected to classroom-based teaching practices (Elmore, 2002; Margolis et al., 2017). This trend is particularly true in the Vietnamese context, where most CPD activities normally take the form of one-shot training attempts and focus much on the what, not the how, of CPD (Tran, 2016). Thus, the effects of such CPD activities are often undervalued. Needless to say, seeking a possible solution to the CPD of Vietnamese teachers is a must.
Lesson Study (LS) appears as an innovative approach focusing on cooperation between teachers in classroom-based teaching practices, contributing to strengthening their CPD (Murray, 2013). Moreover, LS emphasizes the classroom as the best place for the teachers' practices, resulting in the development of their CPD (Stigler & Hiebert, 2009). However, implementing LS in Vietnam is foreseen to be a big challenge due to the impacts of the hierarchical system of policy-making on educational innovations and the influence of the Confucian Heritage Culture, in which collectivism (the extent to which people tend to form strong, cohesive groups) still exists (Nguyen & Jaspaert, 2021). In an attempt to investigate whether implementing LS as an instructional approach for teachers' CPD in the Vietnamese context of English language teaching at the tertiary level is possible, this qualitative study was conducted to explore how Vietnamese teachers at a higher education institution perceived the possibility of LS implementation in terms of its benefits and challenges after they had gone through a period of LS implementation. Accordingly, pedagogical implications for enhancing the opportunity for LS implementation in Vietnam, as well as in other similar contexts, will be suggested.
Lesson Study
Since its beginnings in the 1960s in Japan, LS has placed a strong focus on the core value of the interaction between teachers and teachers and between teachers and students in classroom-based teaching practices. To clarify, LS is a structured process where teachers collaboratively plan, observe, analyze, and refine actual classroom lessons. It involves cycles of preparing, teaching, observing, and discussing lessons to enhance educational practices. It is believed that teaching, under LS perspectives, is a highly complex process that offers teachers ample opportunities to share pedagogical insights and think deeply about the work they do in the classroom. In this respect, LS requires deep thoughts, inquiries, and collaborations with a collective focus on classroom-based practices rather than on teachers themselves (Murata & Lee, 2020; Stepanek et al., 2007). Murray (2013) believed that the concept behind LS is simple, stating that enhancing the quality of teaching and learning in schools requires teachers to study teaching and learning processes together and then find sound ways to improve their practices. In this respect, LS is a potential model mainly structured for promoting teachers' CPD through specific steps and favorable conditions during the implementation process. These steps typically include planning a lesson, observing how it unfolds in a live classroom setting, and conducting post-lesson discussions. This cycle is integral to LS and forms the foundation of its "complex and interactive" nature. Fernandez and Yoshida (2004) and Bocala (2015) added that through LS, teachers observe others' teaching and then try out similar lessons themselves. This process becomes central to the effectiveness of teachers' teaching profession.
LS is a long-established, teacher-led collaborative approach that aims to develop teachers' professions as well as learners' learning outcomes (Fernandez & Yoshida, 2004; Wood & Cajkler, 2018). Though LS is centered on collaborative work among teachers, it also creates a positive learning environment for learners while they are involved in lessons (Lewis & Perry, 2014). Chenaul (2017) supported the idea that collaboration in LS allows teachers to share their previously learned experiences or stories of ineffective or unsuccessful teaching with their colleagues. Moreover, collaboration in LS helps support teachers' revisions of the lesson, clarify misunderstandings or misconceptions while teaching, and effectively highlight areas that need improvement for better instruction (Howell & Saye, 2016). Wood and Cajkler (2018) emphasized that although the cycle of LS procedurally looks simple, it engages teachers in a highly complex interactive process in practice. LS is considered a cycle of instructional improvement for teachers' CPD consisting of serial steps. This cycle begins with identifying a focus for the lesson, followed by collaboratively designing the lesson, observing the lesson in action, and finally conducting a detailed post-lesson discussion. Although different researchers have developed their own cycles for implementation depending on the particular characteristics of individual institutions and working cultures, the working ideas behind the cycle open up for thorough preparation, revision, and implementation (Murray, 2013).
Related Studies
There have been quite a few studies on LS in countries such as the USA, Hong Kong, and Japan. Cheung (2011) conducted an experimental study in Hong Kong to measure the impact of LS on teachers' teaching and students' learning of Chinese writing. The analysis was meticulously performed with pre-, mid-, and post-tests and lesson observations to collect data. The results showed that the teachers in the experimental group worked more effectively than those in the control group, as reflected in the scores achieved by learners in the two groups. Specifically, learners in the experimental group became more creative when writing Chinese than their counterparts in the control one. Marble (2007) conducted a qualitative study in the USA involving 24 pre-service teachers teaching integrated science and mathematics. Through lesson observations, a debriefing protocol, and summative portfolios, the data show that many aspects of this group of pre-service teachers improved rapidly, especially their teaching capacity, self-confidence, classroom management skills, and learner assessment ability. They also significantly improved their students' engagement in the classroom. Matoba et al. (2007) carried out their research in the Japanese context. This study was conducted as a case study using quantitative and qualitative data to determine the impact of LS on teachers' teaching of Japanese, social studies, science, mathematics, and English. Specifically, data were collected from students' academic results, the number of students dropping out, evaluation forms of teachers and students about the quality of education at the school, and teachers' reflections after using LS in their CPD. Regarding the results, most teachers thought LS was a highly effective professional development tool. They created a healthy work environment and professional and practical knowledge by collaborating, sharing, and evaluating each other's lessons. In addition, the students developed their knowledge quickly due to the increased teaching ability of teachers. They found the learning environment more effective, so the number of students absent from school decreased.
Many more studies conducted in South Africa, Tanzania, Sweden, Australia, the UK, and Singapore also show the great potential of LS (Ming Cheung & Yee Wong, 2014). Ono and Ferreira (2010) conducted a case study in South Africa, highlighting the role of lesson study in teacher CPD. The study emphasized the significance of lesson study in promoting ongoing professional growth among teachers. Similarly, Kihwele (2023) explored the factors influencing the implementation of lesson study in pre-service teacher education in Tanzania, shedding light on the importance of lesson study in supporting teachers' CPD. Moreover, Eriņa and Namsone (2021) highlighted the importance of linking teacher PD needs with appropriate solutions, emphasizing the role of lesson-based PD programs in enhancing teachers' understanding of implementing 21st-century skills in the classroom. LS has also been recognized as a model for developing teachers' competence, as demonstrated by Setiawati et al. (2021), who conducted research on pedagogical competence in pre-service biology teachers through lesson study.
Although the implementation of LS has been reported to be advantageous worldwide, research on the possibility of LS implementation in Vietnam is still underrepresented, especially with regard to English teachers. Therefore, it is necessary to conduct a study investigating teachers' perceptions of the implementation of LS in EFL classrooms regarding their CPD. The study addresses the following questions: 1. What are teachers' perceived benefits of LS when implementing LS as an instructional approach for their CPD? 2. What are teachers' perceived difficulties when implementing LS as an instructional approach for their CPD?
Participants
This study occurred in the context of EFL teaching classrooms at a university in the Mekong Delta, Vietnam. Before conducting the study, the research team contacted the university and asked for permission to conduct the research there. With this approval, the research team contacted English teachers in charge of teaching English language courses. Regarding the criteria for selecting participants, they had to be full-time teachers working at the same university and in charge of teaching the same English language course at the time of the study. In addition, these teachers had to confirm that they had no explicit knowledge of LS or any experience of trying out LS in their practical teaching. To address this, the research team provided a comprehensive training program on LS, including its principles, practices, and implementation strategies. This training aimed to equip the participants with the necessary knowledge and skills to effectively incorporate LS into their teaching and observation practices. Initially, three groups with 23 teachers met the above conditions to become the official participants in this study. The first group had eight members, the second group had seven, and the last group had eight. In each of the first and second groups, more than half of the members refused to participate after hearing the research team present the research content, most commonly due to time constraints. As a result, the number of remaining members was so small that these groups were not selected to participate in this study. As for the final group, only two members refused to join for the same reason as above, and the remaining six members agreed to join. Therefore, these six teachers became the official participants of this study. Consequently, the participants were six EFL teachers (five females and one male), with a mean age of 41.5. Regarding qualifications, they all had Master's degrees in Principles and Methods of English Language Education. Following the training, these teachers (n = 6) implemented LS in their classrooms, which was observed and evaluated by the research team to ensure systematic application and to gather data on the impact of LS on their teaching practices. These teachers had an average of 14.3 years of teaching experience and, prior to the training, did not have any knowledge about LS.
Design
The present study employed a qualitative, case study research design to gain a fuller understanding of Vietnamese EFL teachers' perceptions of the implementation of LS. Qualitative research is believed to provide a rich and deep understanding of social phenomena. At the same time, the case study design allows the researchers to explore a case over time through detailed, in-depth data collection involving multiple sources of information rich in context (Creswell, 2012). Due to time and scope constraints, this study only focused on one case, a group of six tertiary teachers who tried out the implementation of LS in their English classes.
The Lesson Study Setting
The current study adopts a model of LS proposed by Murray (2013) comprising six phases. The researchers decided to adopt this model since it is said to be well suited to the teaching context where the study took place.
Phase 1: Forming and Focusing the Group
In this phase, six teachers teaching the same English course were invited to participate in the study. The teachers were all informed about the goals and related issues of LS implementation. Working as a team, under LS perspectives, requires the teachers to always show their highest trust and commitment in a highly cooperative and supportive working environment. In fact, they had to discuss and share responsibilities for improving both their CPD and student learning.
Phase 2: Collaboratively Planning the Lesson
In this phase, the teachers had to recall their past knowledge and experiences and consider other conditional factors, such as materials, facilities, etc., to ensure they could discuss and best design the lesson for the LS implementation. Specifically, the teachers were first encouraged to work collaboratively on the content or concept to be taught. They also agreed on an effective teaching method to employ in their teaching practices. Next, the teachers had to describe the lesson in a detailed manner. Finally, the teachers had to come to a joint agreement on how students' learning outcomes could be evaluated and outline a particular means of data collection for group members to follow when observing the teaching of the lesson.
Phase 3: Teaching and Observing the Lesson
In this phase, one of the teachers in the group taught the lesson to the class while the others observed and gathered data on the students' learning and their working processes. In addition, the lecture was recorded to facilitate discussion and analysis after it was over.
Data collected during this phase permits teachers in the group to reflect on the extent to which student learning did or did not progress and how each component of the lesson supported or impeded student learning.
Phase 4: Discussing the Lesson
After observing the lesson, all of the teachers sat together (right after school on the day of the lesson demonstration) to reflect on the lesson by discussing the data from the video and the observation forms gathered during their lesson observations. As agreed beforehand, the teacher who taught the lesson talked first to express his/her opinions on what went well and what did not, followed by the group members, who took turns sharing their data and what it revealed to them about student thinking and learning. Whenever strengths and weaknesses were pointed out, the group would use the video to replay the part under discussion so that the whole group could better grasp the problem.
Phase 5: Revising and Reteaching the Lesson
In this phase, the group revised the lesson based on the teachers' contributed ideas, leading to an updated version of the lesson. Then, a different teacher taught the revised lesson while the others again observed and gathered additional data for further improvement of the lesson.
Phase 6: Discussing the Revised Lesson and Summarizing the Learning
In this phase, the group members met again to reflect on what they had observed from the teaching of the revised lesson. The teacher secretary of the group noted down the members' shared points about their instruction, student learning, and thinking to ensure that a good record of the process was effectively made. Next, the group wrote up a summary of the entire LS implementation cycle, focusing on what had been learned about their teaching and student learning.
This process was repeated over the course of 14 weeks. All the above steps were strictly followed and not interrupted. The video recording of the class was also consented to by the students, so the research team was confident that no ethical values in the study were seriously violated.
Research Instrument
This study employed semi-structured interviews for data collection, since they allow in-depth exploration of how the teachers perceive the implementation of LS for their CPD (Creswell, 2012). In addition, the semi-structured interview guide created the best conditions and freedom for the participants to express their perspectives. Also, the unstructured parts of the interviews allowed the researchers to collect any unanticipated data deemed vital and relevant to conceptualizing the findings of the study.
Data Collection Procedures
This study was carried out over 16 consecutive weeks, including the first week for the LS orientation and the last week for the closing. Prior to the start of the study, one of the researchers emailed the teachers in the department to provide an information sheet about the study project and asked for their voluntary participation.
For data collection, the teachers had to implement LS within 14 weeks under the close supervision and facilitation of one of the researchers, who was well trained in and familiar with the principles and techniques of LS. During this period, when the other teachers were observing classes as part of the LS process, they utilized a structured evaluation method. This method included a detailed observation checklist and a reflective journal. The checklist was designed to focus on key aspects of teaching and learning within the LS framework, such as the effectiveness of lesson delivery, student engagement, and the application of new teaching strategies. Additionally, the teachers were encouraged to use reflective journals to document their observations, insights, and suggestions for improvements in a more narrative and subjective form. Furthermore, the researchers were present in some classroom sessions to observe the implementation of LS. This direct observation by the researchers aimed to provide an additional layer of data collection and to cross-verify the information gathered from teacher observations. At the end of the semester, face-to-face semi-structured interviews were conducted, allowing the participants to describe their experience of LS implementation in their own words. To allow for the teacher participants' best convenience and comfort, the interviews were conducted in Vietnamese, the mother tongue of both the interviewer and the interviewees (Creswell, 2012; Tran & Phan, 2021). The participants were informed that they could end the interview or withdraw from participation at any time without penalty. During the interviews, the researchers probed responses only for elaboration or clarification.
Each of the interviews lasted approximately 90 minutes, and they were all audio-recorded. All participants were interviewed once, with the potential for a second round if confusing views needed to be clarified or explained. Pseudonyms (Teacher 1, Teacher 2, Teacher 3, Teacher 4, Teacher 5, and Teacher 6) were also created for the teachers in the narratives to maintain confidentiality. The researchers transcribed the interview recordings verbatim, then coded the data into themes. All the questions addressed teachers' perceptions of the benefits as well as the difficulties of implementing LS as an approach for their CPD. It is important to note that the presence of the researchers in the classroom could potentially affect student performance and behavior, which was considered in the analysis. The researchers observed a few discrepancies between their observations and those reported by the teacher observers, mainly in the areas of student engagement and the effectiveness of certain teaching strategies. These discrepancies were further explored during the interviews to understand the teachers' perspectives and interpretations of the classroom dynamics.
Data Analysis
The data of the study were analyzed using a thematic method. To ensure the credibility of the analysis, several steps were taken. First, the interview transcripts were coded through an integrative process involving multiple researchers to cross-check and validate the coding.
This was followed by a meticulous process of categorizing the coded data into two emerging themes (teachers' perceived benefits and difficulties in implementing LS) guided by the two research questions. Additionally, to enhance the reliability of the thematic analysis, we conducted periodic peer reviews and discussions among the research team. Murray's (2013) framework, with its six steps of LS implementation, was used as a guideline for categorizing the LS-related issues emerging during the process of LS implementation. This framework, being well established in the field, added a further layer of rigor to the analysis methodology.
Findings

Teachers' Perceived Benefits of Lesson Study for their Continuous Professional Development
The results indicated that all of the teachers (n=6) believed that LS had a positive impact on encouraging teachers' exchange activities, affording them valuable opportunities to share ideas and receive constructive feedback from their fellow teachers. Teacher 3, for instance, said: "I realized that this model (LS) has created a very cooperative working environment where other teachers and I could actively and freely discuss the teaching lesson, exchange experiences, or share things happening in the classroom." In the same vein, LS was said to offer the teachers favorable conditions to sit together to "discuss the lecture, to share ideas on effective teaching and also to figure out possible problems" (Teacher 4).
It was also revealed that the teachers received significant contributions and constructive feedback from their fellow teachers while participating in the LS model. As reported by the teachers, they could not precisely evaluate the success of their classroom teaching on their own, for instance, how to know whether the students did well in their learning (Teachers 2, 3, and 5) or how to address what the students were not satisfied with (Teachers 3, 5, and 6). Through LS, however, the teachers could share the data they collected (as an observer) or listen to feedback (as the teacher being observed), which might help them understand more about student learning. One of the teachers, for instance, mentioned: "Through accumulated ideas and feedback given by other teachers, I can uncover many hidden issues regarding my student learning" (Teacher 3). The teachers also noted that through LS, they were equally respected in expressing their opinions regardless of their status, whether more or less experienced, old or young. Teacher 2 said: "We are absolutely free to express our ideas or give feedback to others on the basis of mutual respect rather than criticism." Similarly, Teacher 6 reported: "I work in the spirit of mutual respect, sharing and giving suggestions for mutual improvement, not to criticize who is good or who is bad."

The results showed that most of the teachers (n=5 out of 6) agreed that LS helps them deepen their knowledge of the teaching lesson through different means of sharing and discussion, contributing to adjusting their teaching methods to fit students' learning. Teacher 2, for instance, expressed that through collaborating with the teachers in the group before and after conducting classroom-based teaching, she could "gain a deeper understanding of the teaching content" because she had to "explain questioned issues with clarification and negotiate with other teachers in order to come up with a common agreement." As a result, this suggested possible adjustments or even a complete change in her teaching methods. Similarly, Teacher 4 admitted that he could "realize my strengths and weaknesses regarding my understanding of the lesson content through working with fellow teachers," which enabled him to make necessary changes in his teaching. The teachers also reported that through observing others' teaching, they could gain better knowledge and experience of teaching: "I could consciously know what is good, what is bad and thus I can do my teaching at best" (Teacher 6); "this enabled me to explicitly see the connection between what the teacher taught and what the students learned. I could also learn how to analyze, draw conclusions, and modify my future teaching" (Teacher 3).
The results additionally showed that most of the teachers (n=5 out of 6) believed that through LS, they could understand students' learning activities better and more effectively, thereby adjusting their teaching accordingly. To illustrate, Teacher 1 shared that when she observed student learning, she could "learn more about the process of students' acquiring knowledge." Other teachers also indicated that they could identify several of the students' needs while observing them learning, for example, "I know what support students really need in relation to their learning processes" (Teacher 2) or "how students are grasping the lesson content, how they are interacting with each other" (Teacher 1). The teachers also reported that they obtained valuable information about student learning from their fellow teachers' sharing of data collected from classroom observations. One teacher reported: "Thanks to the input from my colleagues, I obtained useful information about the student learning such as whether they were bored or excited or whether they could understand the lesson content and so on" (Teacher 6).
The results indicated that half of the teachers (n=3) held a strong belief that LS helps promote their motivation on the path to CPD. Under the LS perspective, the teachers had to take active roles in participating in activities suitable for their CPD, as reported by one teacher: "It is never enough for us to learn. You know, when joining LS, we had to actively participate in activities organized by the group where we needed to show our responsibility and therefore learned a lot" (Teacher 3). In this respect, Teacher 3 felt "confident and more motivated in my teaching career." Similarly, Teacher 4 added: "LS is an important catalyst for teachers like me to get more motivation to develop my professional skills […]. Now I am very motivated and willing to implement LS as an approach for my CPD".
Teachers' Perceived Difficulties in Lesson Study Implementation
The results indicated that all of the teachers (n=6) lacked confidence in joining discussions during the first stage of LS implementation, mainly due to their limited knowledge of LS and their passive working habits. Teacher 4, for instance, reported that he found it "really hard to figure out what to do, and how to do it […]. I am not confident and ready to do it" because LS was relatively new to him. Regarding the teachers' working culture, it was admitted that while implementing LS they were still "passive and not really willing to share views" (Teacher 6). Similarly, Teacher 5 shared: "Most of the teachers did not like sharing their ideas with others. They tended to be quiet and tentatively listen rather than being an active speaker in discussion".
All teachers (n=6) participating in the study shared a common view that it was hard for them to manage their time so as to participate fully in LS activities. All teachers reported that they had to meet the group frequently and regularly to discuss working plans, prepare lesson plans, and so on, while carrying a heavy teaching load assigned by their university. Teacher 4, for instance, said: "You know, we each have a different and busy teaching schedule assigned by the university, so it is complicated to make an appointment," and Teacher 6 noted: "I have to teach four subjects this semester. I have to prepare the lesson and go to classes very often which takes me much time". In addition, some teachers (n=3) reported having to do extra tasks, e.g., acting as teacher supervisors, research work, or community service, besides their assigned teaching work. Therefore, it was tough for them to "arrange a suitable time for the whole group to meet together" (Teacher 1) because the teachers had to do many tasks at the same time, as reported by Teacher 3: "[…] being a teacher supervisor, I had to meet students, help them with paperwork and other stuff".
The results showed that all the teachers (n=6) reported difficulties in making joint conclusions or decisions when working together. One teacher reported that "it is not easy to reach consensus due to the fact that different teachers shared different points of view about the same thing" (Teacher 2). Another teacher recalled: "Because we shared different points of view when working together, disagreement is inevitable. You know, it is tough to come up with consensus because everyone wants their opinions to be used" (Teacher 6). In addition, the teachers' fear of making others lose face or of being judged disrespectful prevented them from giving direct and constructive feedback, thereby impeding the effectiveness of the LS implementation; this occurred mainly during the early stage of the implementation. For instance, one teacher expressed: "I think that if I give feedback too directly and seriously, I am afraid of making them lose face or upset. So, I just said something around the corner" (Teacher 5), while Teacher 1 reported: "When I first joined the project, I did not dare to express my opinions directly and constructively. You know, I need to show my respect to others".
Discussions and Implications
Regarding the first research question, the results showed that most of the teachers agreed that LS was influential in encouraging teachers' exchange activities, which might create the best opportunities for them to share ideas and receive constructive feedback from their fellow teachers, contributing to their CPD (Mon et al., 2016; Matoba et al., 2007). A possible explanation for this finding could be that LS requires teachers' collaborative participation in sharing activities with their fellow teachers so as to reach a consensus for the improvement of instructional practices (Wood & Cajkler, 2018; Marble, 2007). This reflects previous research stating that one of the actual values of LS is realized through the process of teachers' active participation while collaboratively working with their peers (Chenaul, 2017). Another possible explanation could be that LS encourages the teachers to be honest, productive, and constructive in discussion (Murray, 2013). Fellow teachers are encouraged to offer their ideas and feedback in a spirit of mutual respect rather than criticism. Therefore, the teachers are said to be more confident and actively engaged in discussion (Bozkurt & Ozdemir, 2016; Marble, 2007). From these perspectives, this study suggests that educational leaders need to place top priority on supporting teachers with policies that promote collaboration. Building a broader collaborative culture that recognizes the value of group work allows the teachers to engage with each other's knowledge and practical skills more comfortably and efficiently.
All teachers participating in the study also agreed that LS helps them deepen their knowledge of the lesson, contributing to their teaching skills. Additionally, this may enable the teachers to adjust their instructional methods to fit their students' outcomes. This finding can be explained by the fact that LS creates favorable conditions for the teachers to plan teaching strategies and conduct the lesson logically through its different steps (e.g., collaboratively planning, discussing the lesson, etc.) (Mon et al., 2016; Stepanek et al., 2007). By doing this, it ensures that all the teachers in the group can be better equipped with sufficient knowledge of content, students, instruction, and so on, serving their CPD as well as satisfying the needs of student learning (Matoba et al., 2007). Another possible explanation could be that LS allows the teachers to experience the lesson both through teaching it themselves and through observing others' teaching. Needless to say, teaching and teaching observations followed by reflection significantly deepen the teachers' acquired knowledge to varying degrees (Mon et al., 2016; Özdemir, 2019; Cheung, 2011). It is, therefore, suggested that the teachers should facilitate more occasions for members' narrative sharing and self-critique. To promote this, it is necessary for the leaders, one of the key catalysts of the implementation, to adopt policies under which the implementation is member-led. In other words, the teachers need to be considered the owners of the whole implementation process rather than passive doers who feel as if they are being forced to implement it.
The teachers also agreed that LS allows them to better understand their students' learning activities and processes. This could be explained by the fact that when the teachers observed classroom engagement, they could gather evidence about a student, their motivation, or any reactions to the teaching. This might help them better understand students' knowledge and skills in specific areas (Murray, 2013). LS can be an effective tool for collecting data about students' learning and needs in different ways, such as classroom observations, teachers' notes, and the like (Matoba et al., 2007; Mon et al., 2016). This finding agrees with Bozkurt and Ozdemir's (2016) study, which stated that LS helps increase teachers' knowledge of students' learning. In fact, teachers cannot attend to all student activities while focusing on their own teaching. However, when they observe someone else's class, they can see more clearly what students do and do not do, which helps them adjust their teaching style. Therefore, policies to encourage teachers to participate more in classroom observations need to be proposed and implemented on a broader scale. In the spirit of mutual development, both the observing teacher and the observed teacher need to maintain a spirit of cooperation.
It was also reported that the teachers were highly motivated when implementing LS as an approach for their CPD. This can be explained by the teachers' realization of the benefits and genuine value of LS, particularly as they became more confident and active in collaborating with other teachers and professionals in their teaching (Chenaul, 2017; Marble, 2007). Jhang (2020) stated that teachers participating in LS feel that they gain many valuable things, such as self-confidence, professional status, and attention from colleagues, which affect their motivation in particular and their profession as a whole. It is therefore implied that educational leaders and policymakers should encourage teachers to employ LS more frequently and continuously. Creating favorable conditions and motivating them by all means might help the teachers maintain their motivation to continue LS. Otherwise, the implementation of LS is unlikely to be successful, since this approach requires much time and personal effort from the teachers (Özdemir, 2019).
Regarding the second research question, the results indicated that the teachers were not confident in working with others during the early stage of the implementation. One possible reason could be that because this group of teachers had never been trained in or introduced to LS, they were, needless to say, unfamiliar with the knowledge and practices of LS. Another possible reason could lie in the teachers' shyness and passive working styles; they tended to be passive listeners rather than active speakers (Nguyen & Jaspaert, 2021). Therefore, for the success of LS implementation, it is suggested that necessary policies and supporting actions regarding LS implementation be taken into consideration beforehand, to ensure that the teachers are well prepared and have a clear mindset about what they are doing. Furthermore, to promote teachers' active roles and confidence in joining group discussions, more training activities should be provided to develop teachers' understanding of LS, both theoretically and practically.
It was also reported that the teachers in the study had difficulties with time management. It is undeniable that time management is likely to be a big concern, since the teachers carried a heavy workload assigned by the university while implementing LS at the same time (Mon et al., 2016). Regardless of the teachers' creativity and flexibility in scheduling activities for the implementation, they still had trouble with time management. It is therefore suggested that educational leaders and policymakers take immediate action, for example, by reducing the workload to make room for the LS implementation or by giving the teachers more favorable conditions (e.g., available resources, financial support, equipment, and facilities), enabling them to carry out the implementation more successfully (Mon et al., 2016). It is also noted that the teachers need to be faithful volunteers who show the utmost effort, devotion, and commitment to the success of the LS implementation (Eraslan, 2008; Fernandez & Yoshida, 2004).
The results also revealed that all teachers had difficulties reaching a consensus for joint conclusions and decisions while implementing LS. This finding was unsurprising because different people hold different views on the same thing, as reported by many of the teachers in the study. In addition, it is possible that the teachers were afraid of breaking good rapport with fellow teachers. Vietnamese people have been heavily influenced by the ideology of the Confucian Heritage Culture, in which collectivism and hierarchical relationships are a focus (Nguyen & Jaspaert, 2021). Therefore, it could be the case that the younger or less experienced teachers in the study tended to agree with the older or more experienced ones even when they were not satisfied. From these perspectives, it is suggested that setting standard rules as norms for the teachers before they officially work together would be a good idea, since this might help them build a clear vision of the everyday working culture and acknowledge the value of collaborative work (Özdemir, 2019). In addition, it is worth mentioning that the presence of experts in the field of LS plays a crucial role in guiding teachers to consensus (Stepanek et al., 2007). By extending knowledge of LS implementation and related issues, an expert can provide fellow teachers with professional support and the means to conduct the implementation, especially by sharing ideas and developing common agreement for joint discussions and decisions. Therefore, choosing as team coach a suitable person who is fully prepared with knowledge of LS and shows great enthusiasm for promoting LS implementation is vital (Demir et al., 2013).
Last but not least, the current study's findings, in which teachers adapted their teaching methods based on collaborative discussions and self-reflection, align with previous research indicating that LS fosters a culture of continuous learning and experimentation among educators. For instance, a study by Rappleye and Komatsu (2017) also found that LS encouraged teachers to innovate and adapt their teaching strategies, leading to enhanced pedagogical practices. Additionally, the use of various materials in LS, as indicated in this study, supports the findings of Leavy and Hourigan (2016), who reported that LS participants often introduce new resources or modify existing ones for greater effectiveness. This parallels the current study's indication that teachers likely altered educational materials based on feedback and discussions during LS sessions. Besides, the evolution of teachers' behaviors towards more reflective and feedback-oriented practices, as seen in this study, is consistent with the findings of Akiba and Wilkinson (2016), who noted that LS promotes a reflective and collaborative teaching culture. The current study extends this understanding by specifically highlighting the increased openness to feedback and willingness to experiment. The significant contribution of LS to teachers' CPD in this study is in line with Holden (2023), who found that LS provides a platform for professional learning through collaborative practices. The current study enriches this perspective by detailing how LS facilitates sharing, feedback reception, and observational learning, leading to professional growth.
Conclusions
It was apparent that the teachers participating in the study identified certain benefits of LS that contributed significantly to their CPD, such as encouraging teachers' exchange activities, helping them deepen their knowledge of lesson content, understanding students' learning activities and working processes, and becoming more motivated in pursuing their CPD. However, the teachers who approached LS in this study could not avoid certain difficulties throughout the project. It was reported that the teachers had to deal with issues of time management and a lack of confidence in taking active roles in group work. They also encountered difficulty reaching a consensus for joint work due to the influence of multi-faceted aspects.
This study contributes significantly to the existing literature on LS and CPD in the context of Vietnamese education. By highlighting the practical application of LS in a Vietnamese university setting and its impact on teachers' professional growth, the study provides valuable insights into the adaptability and effectiveness of LS in a non-Western context. This is particularly relevant given the limited research on LS in Southeast Asian educational settings. The findings underscore the potential of LS as a viable method for teacher development in diverse educational contexts, thereby expanding the understanding of LS beyond its traditional Japanese roots.
Beyond mere recommendations, this study offers tangible implications for educational practice. It underscores the need for educational leaders and policymakers to actively support and facilitate LS implementation, which is crucial for its success. This includes providing resources, training, and time for teachers to engage in LS. Additionally, the study emphasizes the importance of teachers' mindset and adaptability in embracing LS. This insight can guide teacher training programs to focus not only on the technical aspects of LS but also on cultivating a collaborative and active working culture among teachers.
However, it is important to acknowledge certain limitations of this study. First, the study was conducted in a specific university setting in the Mekong Delta, Vietnam, which may limit the generalizability of the findings to other contexts, both within and outside of Vietnam. Second, the study involved a relatively small sample size of teachers, which may not fully represent the diverse experiences and perspectives of all teachers in similar settings. Finally, the study's focus on teachers who had no prior experience with LS might have influenced the outcomes and experiences reported. Future research could expand on these findings by including a larger and more diverse sample of teachers, including those with prior experience in LS, to provide a more comprehensive understanding of LS's impact on CPD in various educational contexts.
"Education",
"Linguistics"
] |
Numerical simulation of attosecond nanoplasmonic streaking
The characterization of the temporal profile of plasmonic fields is important both from the fundamental point of view and for potential applications in ultrafast nanoplasmonics. It has been proposed by Stockman et al (2007 Nat. Photonics 1 539) that the plasmonic electric field can be directly measured by the attosecond streaking technique; however, streaking from nanoplasmonic fields differs from streaking in the gas phase because of the field localization on the nanoscale. To understand streaking in this new regime, we have performed numerical simulations of attosecond streaking from fields localized in nanoantennas. In this paper, we present simulated streaked spectra for realistic experimental conditions and discuss the plasmonic field reconstruction from these spectra. We show that under certain circumstances when spatial averaging is included, a robust electric field reconstruction is possible.
Introduction
Attosecond science and nanoplasmonics are two rapidly evolving fields that involve a large number of phenomena and have an impact on many other areas of science and technology beyond fundamental physics. Attosecond science based on high harmonic generation [1,2] investigates ultrafast phenomena in matter and includes areas such as production of single attosecond pulses [3,4], capturing the electron dynamics in atoms on a sub-femtosecond time scale [5][6][7] and the probing of the molecular structure and dynamics [8][9][10]. Nanoplasmonics is the study of collective electron excitations at metal surfaces and nanostructures, where the electromagnetic fields can be confined on the nanometre scale. The enhanced localized nanoplasmonic fields can be used for the guiding of light [11,12], for spectroscopy and biological sensing [13][14][15], for the control of light-matter coupling [16] and for the generation of high harmonics from gas using the direct output of a short-pulse laser oscillator [17].
The nanoplasmonic fields can decay on a few-femtosecond temporal scale [18], and characterizing their temporal profile, which is generally different from the temporal profile of the exciting laser pulse, is crucial for understanding the high harmonic generation by the plasmonic enhancement process and other nanoplasmonically enhanced ultrafast phenomena. Nanoplasmonic fields can be measured by combining nano- and atto-science, as proposed by Stockman et al [18], who suggested exploiting the attosecond resolution of the streaking technique [3,19,20] with the nanometre spatial resolution of the photoelectron emission microscope (PEEM) [21,22]. In the experiment proposed by Stockman et al [18], electrons are ionized from a plasmonic nanostructure by the attosecond extreme ultraviolet (XUV) pulse and then accelerated by the local plasmonic field excited by a few-cycle laser pulse at nanometre-size 'hot spots' on the surface. Electrons are detected with an energy-resolving PEEM as a function of the time delay between the XUV pulse and the laser pulse. Since the PEEM can image the photoelectron emission, a random nanostructure can be studied and the temporal evolution of the plasmonic field at different hot spots can be detected. Compared to indirect methods based on autocorrelation previously used for characterizing the plasmonic fields [22][23][24][25], streaking offers direct access to the plasmonic electric field.
To study the local fields of the nanoantennas that have been used to generate high harmonics using the laser oscillator [17], the spatial resolution of PEEM is not needed, because the photoelectron signal is a sum of signals originating from an array of identical antennas, and time-of-flight (TOF) detection can be employed. Moreover, for applications such as high harmonic generation by plasmonic field enhancement, a high degree of uniformity and reproducibility of the antenna array is needed. The typical size of the nanoantennas is 50-100 nm, with field-enhanced regions in the gaps of the order of 10-50 nm, which is considerably larger than the spatial extent of hot spots (∼1 nm) in the experiment proposed by Stockman et al [18]. The electron escape time from the antennas is typically considerably longer than the laser period and, as we show, the regime of streaking from the antennas is between the standard oscillatory regime known from the gas phase experiments [3,19,20] and the instantaneous regime from a plasmonic nanosystem [18]. A numerical analysis is therefore needed in order to predict the streaked photoelectron spectra from nanoantennas.

Figure 1. Geometry of the antennas used for the calculation, designed to generate a uniform region of significant field enhancement. (a) Top view: gold rectangular antennas form an array on the sapphire substrate and the field is enhanced in the gap between them. Dimensions are in nanometres. (b) Side view: two gold rectangles are placed on a sapphire substrate and the incident field, polarized in the x-direction, is enhanced in the gap between them. Electrons are streaked by the field in the gap and detected with a TOF spectrometer.
In this paper, we present a numerical simulation of the electron motion in nanoplasmonic fields in the gap between the antennas. We perform calculations for realistic experimental parameters of rectangular gold antennas on a sapphire substrate, organized in an array as shown in figure 1. We calculate the plasmonic fields by a finite-difference time-domain (FDTD) method and then solve the classical equation of motion of the electron in the plasmonic field for different time delays between the electron birth and the plasmonic pulse. We consider electrons ionized by the XUV pulse from gold. We use centre-of-mass analysis of the simulated streaked spectra to determine the plasmonic field, which is then compared with the field used for the calculation. We show that when spatial averaging of the initial electron position within the antenna is included, the plasmonic field can be reconstructed with high accuracy.
Two regimes of streaking
Attosecond streaking from nanoplasmonic fields differs from the standard streaking in the gas phase due to the field localization on a nanometre scale. Therefore, before presenting the results of the numerical simulation, we briefly summarize the differences between 'standard' streaking and 'instantaneous' streaking.
As pointed out in [18], in the gas-phase streaking experiments, the electron ionized by the XUV pulse moves in the laser field until the laser pulse is over (figure 2(a)). The focal volume of the XUV pulse is smaller than the focal volume of the laser pulse; therefore, the electron experiences an approximately spatially uniform laser field. The electron escape time t_e from the focal volume is longer than the duration of the laser pulse. In this regime, referred to as the standard oscillatory regime, the energy shift of the electron is determined by the value of the vector potential at the instant of the electron's birth. The final electron velocity v_f is [20]

$$\mathbf{v}_f = \mathbf{v}_0 - \frac{e}{m_e}\,\mathbf{A}(t_0), \qquad (1)$$

where v_0 is the initial electron velocity at the time of birth t_0, e and m_e are the electron charge and mass, respectively, and A(t_0) is the vector potential of the laser field at the electron's time of birth. A different situation occurs when the spatial extent of the field is so small (of the order of nanometres) that the electron leaves the field in a shorter time than the laser field period T (figure 2(b)). This is the regime of instantaneous emission [18], defined by t_e ≪ T, where t_e is the electron escape time from the enhanced field region. For an electron with an initial energy of E_0 = 80 eV and a characteristic enhanced field dimension of 1 nm, the escape time is t_e = 190 as, which is much smaller than the laser period of 2.7 fs, and therefore the streaking regime is effectively instantaneous. In the instantaneous regime the final electron energy E_f(r, t_0) is related to the local electrostatic potential φ(r, t_0) at the electron's time and place of birth by [18]

$$E_f(\mathbf{r}, t_0) = E_0 - e\,\varphi(\mathbf{r}, t_0). \qquad (2)$$

In this regime, the electron energy does not oscillate as the electron moves in the electric field; rather, the electron is 'kicked' by the instantaneous value of the plasmonic field. An intermediate situation occurs in nanoantennas with a typical gap size of ≈ 30 nm. In this case, the electron escape time is t_e ≈ 5.7 fs, which is longer than the laser period but typically shorter than the duration of the excited plasmonic field. Consequently, the streaking regime lies between the instantaneous and the oscillatory regimes. Streaking in this intermediate regime is not amenable to a simple analytical form and will be simulated in section 2.3 by numerical integration of the electron's equation of motion in the nanoplasmonic field.
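To make the regime classification concrete, the short Python sketch below reproduces the escape-time estimates quoted above; the numerical regime thresholds are illustrative assumptions, since the text defines the regimes only qualitatively.

```python
import numpy as np

M_E = 9.109e-31       # electron mass (kg)
E_CHARGE = 1.602e-19  # elementary charge (C)

def escape_time(energy_ev, extent_m):
    """Time for an electron of the given kinetic energy to cross a
    field-localization region of the given spatial extent."""
    v = np.sqrt(2.0 * energy_ev * E_CHARGE / M_E)  # classical velocity
    return extent_m / v

T_LASER = 2.7e-15  # 800 nm optical period quoted in the text (s)
for extent, label in [(1e-9, "hot spot, ~1 nm"), (30e-9, "antenna gap, ~30 nm")]:
    t_e = escape_time(80.0, extent)  # 80 eV electron, as in the text
    # illustrative thresholds, not part of the original analysis
    if t_e < 0.1 * T_LASER:
        regime = "instantaneous"
    elif t_e > 3.0 * T_LASER:
        regime = "oscillatory"
    else:
        regime = "intermediate"
    print(f"{label}: t_e = {t_e * 1e15:.2f} fs -> {regime}")
```

Running this reproduces t_e = 0.19 fs (190 as) for the 1 nm hot spot and t_e ≈ 5.7 fs for the 30 nm gap.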
Plasmonic field calculations
For the simulation of nanoplasmonic streaking in the intermediate regime, we consider an array of gold rectangular antennas placed on a sapphire substrate. The geometry and coordinates are defined in figure 1. To design antennas with a resonance at 800 nm and to simulate the time-dependent plasmonic field, we used Lumerical FDTD Solutions, a proprietary software package implementing the FDTD method. Three-dimensional simulations were performed in the time domain with perfectly-matched-layer boundary conditions, and non-uniform adaptive meshes were used in order to model the antenna gap area accurately with a mesh size of 1 nm. The total simulation time was 400 fs with a time step of 0.953 as. The convergence of the numerical calculations with respect to the mesh size and the total simulation area was checked. The optical response of gold was modelled through a fit of the Johnson and Christy [26] experimental data by multi-coefficient models within the wavelength range of 600-1000 nm. The dimensions of the antennas with the resonance centred at 800 nm were calculated to be the following: height h = 40 nm, gap width w = 30 nm, antenna length L = 100 nm and antenna width in the y-direction l = 60 nm.
To calculate the time-dependent plasmonic field, an incident pulse polarized in the x-direction with a Gaussian temporal profile, a duration of 5 fs and a field amplitude normalized to 1 was used in the FDTD simulation. The actual field strength for the simulation of streaking will be specified later. The calculated plasmonic field strength is normalized to the incident field. We assume that the time dependence of the plasmonic field is the same at all points in space and we decompose the total plasmonic field E(x, z, t) into its spatial and temporal components:

$$E(x, z, t) = f(x, z)\,E(t), \qquad (3)$$

where f(x, z) is the spatial enhancement profile. This assumption, supported by our FDTD simulations, is needed for the reconstruction of the electric field from the streaking process, where only one time-dependent field E(t) is retrieved. Note that the XUV pulse in the streaking experiment does not affect the plasmonic field, because its high photon energy is far from the resonance, and it is not simulated by the FDTD method.
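The separability assumption can be checked directly on a sampled field array. The following Python sketch (array shapes and function names are our own constructions, not part of the FDTD package) factors a field into spatial and temporal parts and reports a residual that vanishes for a perfectly separable field.

```python
import numpy as np

def separate_field(E_xzt, ref_ix, ref_iz):
    """Factor E(x, z, t) ~ f(x, z) * E(t), taking the time trace at a
    reference point as the temporal part. E_xzt has shape (nx, nz, nt)."""
    E_t = E_xzt[ref_ix, ref_iz, :]  # temporal profile at the reference point
    # spatial map: least-squares projection of each point's trace onto E(t)
    f_xz = np.tensordot(E_xzt, E_t, axes=([2], [0])) / np.dot(E_t, E_t)
    residual = E_xzt - f_xz[:, :, None] * E_t[None, None, :]
    quality = np.linalg.norm(residual) / np.linalg.norm(E_xzt)
    return f_xz, E_t, quality  # quality ~ 0 means the field is separable
```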
The fields are shown in figure 3(b). The black line shows the incident field and the blue line shows the time-dependent plasmonic field E_x(t) at the position x = 10 nm, z = 0 nm. The calculated plasmonic field is longer in duration than the incident field; it persists for more than 10 fs after the incident pulse.
Classical simulation of nanoplasmonic attosecond streaking
In the streaking experiment, the attosecond XUV pulse ionizes an electron by a single-photon process and the electron is then streaked in the plasmonic field excited by the laser pulse (figure 4(a)). At the beginning of the streaking calculation (which is two-dimensional in the xz-plane), the electron is born at time t_0 at the initial position (x_0, z_0) with an initial velocity v_0 at an angle α_0 from the z-axis. The time t_0 is the delay between the birth of the electron and the plasmonic field E(t), where the peak of the incident pulse is at time 15 fs; see figure 3(b). The initial position is chosen at the edge of the gold antenna (x_0 = w/2, z_0 = 0-h). We consider ionization by an XUV pulse with a central photon energy of 90 eV and a Gaussian temporal profile with a full-width at half-maximum (FWHM) duration of τ = 580 as. A rather long XUV pulse duration (compared to the present state of the art [4]) is chosen in order to keep sufficient spectral resolution for a streaking measurement. The initial electron velocity v_0 is determined by the single-photon ionization process by the 90 eV XUV pulse and is scanned through the initial photoelectron spectrum from gold, which contains contributions from several energy bands [27]. The photoelectron spectrum used for our calculations (figure 4(b)) was obtained from a spectrum measured with a narrow-line source with a photon energy of 1487 eV, corrected for energy-dependent ionization cross-sections and convolved with the spectral content of the XUV pulse. We have found, in our preliminary measurements, that the photoelectron signal from the dielectric substrate is considerably lower than that from gold; therefore, we do not consider the photoemission from sapphire.
The XUV pulse will ionize electrons from the whole sample regardless of the field enhancement. Therefore, there will be a large background signal of electrons ionized from the top surface of the gold antennas, which will not be streaked in the enhanced field in the gap. To suppress these background electrons, a layer of dielectric such as SiO_2 can be deposited on the top surface of the gold antennas in order to eliminate ionization from these areas. In this arrangement, only electrons from gold that are streaked in the enhanced field in the gap will be observed.
Once the electron is born, its equation of motion in the plasmonic field is numerically solved. The trajectory of the electron is integrated with a leapfrog algorithm with a time step of 48 as. The field that enters the equation of motion is the plasmonic field calculated by the FDTD method. The electric field strength of the incident laser pulse is taken to be 1.03 × 10⁹ V m⁻¹ (corresponding to an intensity of 1.4 × 10¹¹ W cm⁻²) and is enhanced in the gap as shown in figure 3(a). The calculation finishes when the plasmonic pulse ends, when the electron hits the gold antenna, or when the electron leaves the region of the nanoplasmonic field. For this calculation, the end of the plasmonic pulse occurs when the field strength drops to 0.005 of its maximal value (at time ∼45 fs).
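A minimal sketch of such a propagation step is given below, assuming a simple velocity-kick leapfrog and a field callback; the geometry bounds and the exit radius are illustrative assumptions, not the authors' exact code.

```python
import numpy as np

M_E, Q_E = 9.109e-31, -1.602e-19  # electron mass (kg) and charge (C)

def streak_trajectory(r0, v0, t0, field, t_end, dt=48e-18):
    """Leapfrog propagation of one electron in the local field.
    `field(r, t)` must return the (Ex, Ez) field vector; the bounds
    below (30 nm gap, 40 nm antenna height, 200 nm exit radius) are
    illustrative stopping conditions."""
    r = np.asarray(r0, dtype=float)
    t = t0
    # offset the velocity by half a step to start the leapfrog scheme
    v = np.asarray(v0, dtype=float) + 0.5 * dt * (Q_E / M_E) * field(r, t)
    while t < t_end:
        r = r + dt * v                           # drift
        t += dt
        v = v + dt * (Q_E / M_E) * field(r, t)   # kick
        hit_antenna = abs(r[0]) >= 15e-9 and 0.0 <= r[1] <= 40e-9
        left_region = np.linalg.norm(r) > 200e-9  # outside the enhanced field
        if hit_antenna or left_region:
            break
    return r, v
```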
To simulate the streaked spectrum that will be detected by the spectrometer, we consider an ensemble of electrons with an initial energy distribution given by the initial photoelectron spectrum, and we record the final electron velocity v_f at the end of the simulation. A TOF detector with acceptance half-angle θ = 1° is placed at an angle θ_0 = 20° from the z-axis. The TOF detector has to be placed at an angle from the laser polarization in order to detect electrons that leave the gap and do not hit the antenna. This geometry slightly reduces the modulation of the final electron energy compared with detection parallel to the direction of the field polarization. If the direction of the final electron velocity is towards the TOF, i.e. if tan(θ_0 − θ) ≤ v_fx/v_fz ≤ tan(θ_0 + θ), the electron is detected and contributes to the final streaked spectrum with a weight given by its initial spectrum. Because the relationship between the initial and the final electron direction is not known in advance, we perform calculations for a range of initial electron angles α_0 and count only electrons that are detected. The distribution of the initial angles α_0 is assumed to be isotropic. So far, we have considered only electrons ionized at a single time t_0 and have not taken into account the effect of the XUV pulse duration on the final streaked spectrum. The XUV pulse duration is included in the calculation by weighting over a distribution of times of birth; that is, at a given time delay t_0, the number N_τ(t_0) of detected streaked electrons is given by

$$N_\tau(t_0) = \sum_j I_{\mathrm{XUV}}(t_j)\, N(t_0 + t_j), \qquad (4)$$

where N(t_0 + t_j) is the number of detected electrons at time t_0 + t_j, I_XUV(t_j) is the normalized intensity envelope of the XUV pulse and the index t_j runs through the duration of the XUV pulse (−1.5τ < t_j < 1.5τ, where τ is the FWHM duration of the XUV pulse). In this way, the number of streaked electrons at a given delay t_0 includes the contributions of streaked electrons at times around t_0 with a weight given by the intensity of the XUV pulse. Finally, the detected streaked spectra were averaged over the initial electron z-position at the edge of the antenna.
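The detection test and the XUV weighting of equation (4) can be written compactly; the sketch below assumes a Gaussian intensity envelope and a discretization of t_j, both of which are our own choices.

```python
import numpy as np

def detected(v_f, theta0=np.deg2rad(20.0), half_angle=np.deg2rad(1.0)):
    """TOF acceptance: tan(theta0 - theta) <= vfx/vfz <= tan(theta0 + theta)."""
    vfx, vfz = v_f
    return vfz > 0 and (np.tan(theta0 - half_angle)
                        <= vfx / vfz <= np.tan(theta0 + half_angle))

def weighted_counts(N_of_t, t0, tau, n_samples=61):
    """Equation (4): N_tau(t0) = sum_j I_XUV(t_j) N(t0 + t_j), assuming a
    normalized Gaussian intensity envelope of FWHM tau."""
    t_j = np.linspace(-1.5 * tau, 1.5 * tau, n_samples)
    I_xuv = np.exp(-4.0 * np.log(2.0) * (t_j / tau) ** 2)
    I_xuv /= I_xuv.sum()
    return sum(I * N_of_t(t0 + t) for I, t in zip(I_xuv, t_j))
```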
Simulation results
To gain insights into the streaking from localized fields, we first examine trajectories of individual electrons in the gap between antennas. Figure 5(a) shows trajectories of electrons with an initial energy of 80 eV and initial angle α_0 = 35°, ionized at two different times, t_0 = 10 and 18 fs, respectively. An electron with this energy can come from the valence band of gold irradiated by our 90 eV attosecond pulse. One can see that the plasmonic field does not significantly affect the direction of the electrons. The electron trajectory is slightly modulated, mainly for the electron born at 10 fs.
More information can be obtained from the time-dependent electron energies, shown in figure 5(b) for the same conditions as in figure 5(a). The streaking regime is not instantaneous, and the electron energy is modulated as the electron propagates in the plasmonic field. The situation also differs from standard gas-phase streaking because the modulation of the electron energy does not decay to zero with time when the electron leaves the enhanced field in the gap before the end of the plasmonic pulse. When the electron is born early (t_0 = 10 fs; solid line) it leaves the enhanced field region before the end of the plasmonic pulse (at time 21 fs), and there is no direct relationship between the final and the initial electron energy. Electrons born at longer delays (t_0 = 18 fs; dotted line) leave the field at later times, in this case at 29 fs. At this time the plasmonic field is already small (see figure 3(b)), and the amplitude of the electron energy oscillation is low. Therefore, the standard oscillatory regime of streaking is being approached.
For comparison, figure 5(c) shows the time-dependent energies of a slower electron with an initial energy of 14 eV, which might originate in inner shells of the substrate (other conditions are the same as those for the 80 eV electron). In this case, the electrons spend a longer time in the enhanced region because they are slower. For both times of birth shown, the oscillation amplitude of the electron energy decreases with time; therefore, the regime of streaking is almost the standard regime. The energy of the electron born at t_0 = 10 fs (at the peak of the field) oscillates, and its final energy is roughly the same as its initial energy (14 eV), because the vector potential at the time of birth is zero. The energy of the electron born at 18 fs, which corresponds to a zero of the field and therefore a maximum of the vector potential, is shifted to a final value of 12 eV.
One can see that electrons born early in the pulse (t_0 ≲ 15 fs in our case) are streaked in the intermediate regime. For electrons born later, the regime approaches the standard oscillatory regime, because these electrons leave the enhanced field region towards the end of the plasmonic pulse. Furthermore, the streaking regime depends on the initial electron energy. Electrons with a low initial energy spend a longer time in the region of enhanced field, and therefore the oscillatory regime is approached at shorter delays. However, the modulation of the electron energy is lower for slower electrons [19,20].
We now turn to a simulation of the streaked spectra from which the plasmonic field can be reconstructed. Figure 6(a) shows the simulated streaked spectra from gold with electrons originating at a single position at the face of the antenna (x_0 = 15 nm, z_0 = 23 nm). The final electron energy is modulated, and contributions from the valence band (energies around 80 eV) and the 5p band (energies around 30 eV) can be recognized. There are oscillations in the electron energy in the simulated spectra for short time delays (< 10 fs), resulting from the fact that the electron leaves the enhanced field region before the end of the plasmonic pulse. These 'ghost' oscillations, which are a consequence of the intermediate regime, do not correspond to the full oscillations of the plasmonic field. The ghost oscillations at early delays are suppressed when the streaked spectra are averaged over the initial electron z-position, as shown in figure 6(b). Here, the initial position of the electron was varied in the range z_0 = 1-39 nm while the initial x-position was kept at 15 nm. The oscillations at early times are smeared out because of the different contributions of electrons born at different heights, and the simulated streaked spectra resemble the streaked spectra in the standard oscillatory regime.
Electric field reconstruction
The plasmonic field is reconstructed from the simulated streaked spectra by a centre-of-mass analysis [19]. For the analysis, we assume that the streaking is in the standard oscillatory regime and that the final electron velocity is related to the vector potential of the field at the electron's time of birth by (1). The assumption of the standard regime of streaking is justified by the fact that at later delays the standard regime is approached, and even at early delays the spatially averaged streaked spectra closely resemble the standard streaked spectra. We now further assume that the plasmonic field has only an x-component and that the angle between the initial electron velocity and the polarization direction is π/2 + θ_0. Taking the square of (1) and solving for A yields

$$A(t_0) = \frac{m_e}{e}\left(-v_0 \sin\theta_0 + \sqrt{v_0^2 \sin^2\theta_0 + v_f^2 - v_0^2}\,\right), \qquad (5)$$

where E_f = m_e v_f^2/2 is the final electron energy, which is obtained by calculating the centre of mass of the streaked electron spectra,

$$E_f = \frac{\sum_E E\,N(E)}{\sum_E N(E)}, \qquad (6)$$

where N(E) is the number of electrons with the final energy E and the summation is performed over the valence energy band of gold. The plasmonic electric field is then obtained by differentiating the vector potential (5) with respect to time t_0 and will be compared with the plasmonic field used for the calculation (figure 3(b)).
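A possible implementation of this reconstruction chain (centre of mass, then equation (5), then numerical differentiation) is sketched below; the use of a single nominal initial energy E_0 for the whole valence band is a simplification made for illustration.

```python
import numpy as np

M_E, E_CH = 9.109e-31, 1.602e-19  # electron mass (kg), elementary charge (C)

def reconstruct_field(t0, spectra, energies_ev, e_window=(60.0, 110.0),
                      E0_ev=80.0, theta0=np.deg2rad(20.0)):
    """Centre of mass (6) -> vector potential (5) -> E(t0) = -dA/dt0.
    spectra: array of shape (n_delays, n_energies); t0: delay axis in s."""
    sel = (energies_ev >= e_window[0]) & (energies_ev <= e_window[1])
    E_com = (spectra[:, sel] * energies_ev[sel]).sum(axis=1) \
            / spectra[:, sel].sum(axis=1)
    v_f = np.sqrt(2.0 * E_com * E_CH / M_E)   # final speeds
    v_0 = np.sqrt(2.0 * E0_ev * E_CH / M_E)   # nominal initial speed
    s = v_0 * np.sin(theta0)
    A = (M_E / E_CH) * (-s + np.sqrt(np.maximum(s**2 + v_f**2 - v_0**2, 0.0)))
    return -np.gradient(A, t0)                # E(t0) = -dA/dt0
```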
To quantify the accuracy of the electric field reconstruction, we use a root mean square (rms) field error defined by [28]

$$\varepsilon = \left(\int \left|E_1(t) - E_2(t)\right|^2 \,\mathrm{d}t\right)^{1/2}, \qquad (7)$$

where E_1(t) and E_2(t) are the two normalized complex electric fields to be compared. A value of ε = 0 corresponds to a perfect reconstruction; larger values of ε correspond to less agreement between the two fields. The maximum value of ε for normalized fields is 2, which corresponds to fields that differ only in sign (E_1 = −E_2). Figure 7(a) shows the centre of mass (6) as a function of the delay between the XUV pulse and the plasmonic pulse for the streaked spectra from gold for a single initial electron position. To calculate the centre of mass, the spectra were integrated over the energy range 60-110 eV. There are ghost oscillations in the centre of mass at early delays, which present a problem for the electric field reconstruction. These oscillations are suppressed when the simulated spectra are averaged over the initial electron z-position, as can be seen from figure 7(c). This is due to the different contributions of electrons born at different heights.
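On a sampled time grid, the error metric of equation (7) reduces to a few lines:

```python
import numpy as np

def rms_field_error(E1, E2):
    """Equation (7) on a sampled grid: fields are normalized to unit energy,
    so identical fields give 0 and sign-flipped fields give 2."""
    E1 = np.asarray(E1) / np.sqrt(np.sum(np.abs(E1) ** 2))
    E2 = np.asarray(E2) / np.sqrt(np.sum(np.abs(E2) ** 2))
    return np.sqrt(np.sum(np.abs(E1 - E2) ** 2))
```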
The plasmonic electric field is reconstructed from the oscillations in the centre of mass by differentiating (5). Both the reconstructed field and the original field used for the calculation are shown in figures 7(b) and (d). The original field is normalized to have the same maximum as the reconstructed field in order to facilitate a comparison of the temporal profile; the amplitude reconstruction will be addressed later. For the single initial electron position (figure 7(b)), the falling edge of the plasmonic pulse is reconstructed well, but there are ghost oscillations at the beginning of the pulse and the rms field error is ε = 0.27, indicating a rather poor reconstruction. When averaging over the initial electron z-position is included (figure 7(d)), the ghost oscillations are reduced and the field is reconstructed with good accuracy (ε = 0.10). Although the regime of streaking is intermediate and the spectrum is broad, the spatial averaging of the initial electron position helps to remove the ghost oscillations at early delays, and the reconstructed field follows the original field in time.
The amplitude of the reconstructed electric field in figure 7(d) is 3.2 × 10⁹ V m⁻¹, which is a factor of 3 larger than the amplitude of the incident laser pulse. A larger field amplitude is expected, as the plasmonic field inside the gap is about six times stronger than the incident field. In the case of the broad spectra from gold, the reconstructed field amplitude depends on the choice of energies used for calculating the centre of mass and varies between enhancement factors of 2 and 6. The magnitude of the reconstructed field also decreases as the XUV pulse duration is increased and would be larger for shorter XUV pulses.
Summary and conclusions
We have numerically analysed attosecond streaking from gold nanoantennas on sapphire. The regime of streaking from the antennas is between the standard oscillatory regime and the instantaneous regime and depends on the electron energy and dimensions of the antennas. For the investigated case, the regime is closer to the standard oscillatory regime. However, for early time delays, the resulting streaked spectra from antennas can contain ghost oscillations because of the electron leaving the enhanced field region before the end of the plasmonic pulse. These oscillations do not follow the plasmonic electric field. When spatial averaging of the initial electron z-position is included in the calculation, the ghost oscillations can be reduced and the field can be reliably reconstructed. The spatial averaging will be automatically present in an experiment; therefore, the measured streaked spectra from nanoantennas are expected to be very similar to streaked spectra in the standard oscillatory regime.
The plasmonic field can be retrieved from the streaked spectra by a centre-of-mass analysis, which reconstructs the temporal profile well, although the amplitude can be slightly underestimated. For electrons from gold, the spurious oscillations at early delays are suppressed by spatial averaging and the rms field error is ε = 0.10, indicating a rather reliable reconstruction. One can see that although the regime of streaking is intermediate because of the finite dimensions of the enhanced field, field reconstruction should be possible and experiments are feasible to determine the plasmonic field.
"Physics"
] |
Validation of a simple isocratic HPLC-UV method for rifampicin and isoniazid quantification in human plasma
© 2018 Laura Carolina Luciani-Giacobbe et al. This is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike Unported License (http://creativecommons.org/licenses/by-nc-sa/3.0/). *Corresponding author: María Eugenia Olivera, Haya de la Torre y Medina Allende, Edificio Ciencias 2, Ciudad Universitaria (X5000HUA), Córdoba, Argentina. E-mail: meoliver@fcq.unc.edu.ar
INTRODUCTION
The treatment of tuberculosis is currently complex and prolonged (Beltrame et al., 2014; World Health Organization, 2016), and consists of the administration of rifampicin (RIF) and isoniazid (INH) in a fixed-dose combination (FDC) as immediate-release solid dosage forms (tablets or capsules) for 6 months. They are also associated with ethambutol (ETA) and pyrazinamide (PIR) during the first 2 months (World Health Organization, 1997).
Although the treatment is still effective (Lu et al., 2017), it has multiple associated problems that compromise its effectiveness. One of the main drawbacks is the low and variable bioavailability of RIF, which is mainly related to the poor wettability and slow dissolution rate of the solid, due to the different properties of RIF polymorphs, its hydrophobicity, and its pH-dependent solubility. The influence of some excipients on the performance of the solid dosage form and the inter-individual variability in the absorption and metabolism of RIF are also associated with bioavailability problems (Becker et al., 2009; Singh et al., 2006). Additionally, the degradation of RIF at gastric pH, accelerated by the presence of INH in the formulation, has a negative impact on its bioavailability (Singh et al., 2006).
As RIF and INH are still effective, overcoming the main technological drawbacks of these therapeutic agents in order to enhance compliance and adherence as well as improve the effectiveness of the drugs is an interesting challenge for the pharmaceutical technology area.
In order to ensure the adequate performance of current and innovative formulations, the availability of validated bioanalytical methods is important for the evaluation of their bioavailability (Agrawal and Panchagnula, 2005) as well as for therapeutic drug monitoring (Alsultan and Peloquin, 2014; Verbeeck et al., 2016).
Liquid chromatography coupled to mass spectrometry (LC-MS/MS) instrumentation is selective and sensitive. However, it is expensive and requires highly skilled operators, which restricts its use primarily to high-volume or research laboratories. Therefore, HPLC-UV is still the most common and economical method for the simultaneous determination of anti-tuberculosis drugs. A gradient module allows complex samples to be separated; however, the instrumentation is expensive, the methods are complex to develop and transfer, and gradient elution is generally slower than isocratic elution, since the column must be re-equilibrated with at least 10 column volumes of the initial eluent before reliable retention can be obtained in the next run (Schellinger and Carr, 2006). Thus, many of these methods cannot be reproduced (Kim et al., 2015; Zhou et al., 2010).
Only a few isocratic HPLC methods have been published, and none of them has been validated for RIF and INH quantification in pharmacokinetic studies. Isocratic methods are inexpensive and accessible to developing countries, in which tuberculosis is more prevalent.
In this context, there is a need to develop and validate an HPLC method for the reliable quantification of RIF and INH in plasma samples. The purpose of this study is to develop a simple, reproducible isocratic HPLC-UV method for the determination of RIF and INH concentrations in human plasma and to validate it in accordance with international guidelines.
Chemicals and materials
RIF and INH of pharmaceutical grade were acquired from Parafarm® (Argentina) and Droguería Libertad (Argentina), respectively. Human plasma was kindly donated by Laboratorio de Hemoderivados, Córdoba, Argentina. Rifamycin (RIM) and atenolol (AT), of pharmaceutical grade, were purchased from Parafarm® (Argentina) and used as internal standards. p-Hydroxybenzaldehyde (pro-analysis grade, Sigma-Aldrich, Germany) was used as the derivatizing agent for INH. For sample processing and HPLC quantification, acetonitrile (HPLC grade, Sintogran®, Argentina), methanol (HPLC grade, Sintogran®, Argentina) and Milli-Q water were used.
All other reagents were of pro-analysis quality.
HPLC apparatus
The instrument consisted of a Waters 1525 pump, a Waters 717 plus autosampler, a Waters 1500 series column heater and a Waters 2996 photodiode array detector (PDA) (Waters Corp., Milford, USA). Data acquisition and registration were performed with the Empower® software.
System 1
The analytical column was a reversed-phase Luna C18 column (250 mm × 4.6 mm i.d., 5 μm particle size; Phenomenex, Torrance, CA, USA) maintained in the column oven at 30°C and protected by a Phenomenex® SecurityGuard precolumn. The mobile phase consisted of methanol:potassium phosphate buffer (pH 7.00; 0.02 M) (75:25, v/v). The elution was carried out isocratically at a flow rate of 0.5 mL/min. The mobile phase was filtered through a 0.45 μm Millipore Durapore® filter (Billerica, MA, USA) and degassed under vacuum prior to use.
Preparation of standards and quality control samples
Working solutions of RIF were prepared in Milli-Q water at concentrations between 0.16 and 200.00 μg/mL. These solutions were kept in light-tight flasks and used immediately. Calibration standards of 0.16; 0.31; 1.51; 4.68; 7.56 and 37.80 μg/mL of RIF were prepared extemporaneously by adding an appropriate volume of each RIF working solution to 1 mL of human plasma. This range of the standard curve was chosen to reflect the plasma concentrations expected in a typical 24 h pharmacokinetic profile after administration of 300 or 600 mg RIF tablets (Avachat and Bhise, 2011; Wang et al., 2013; Xu et al., 2013).
Working solutions of INH were prepared in Milli-Q water at concentrations between 0.57 and 230.00 μg/mL. These solutions were kept in light-tight flasks, stored under refrigeration and used within four weeks (Agrawal et al., 2001). Calibration standards of 0.89; 3.57; 8.92; 17.84; 35.68 and 71.36 μg/mL of INH were prepared extemporaneously by adding an appropriate volume of each INH working solution to human plasma. This range of the standard curve was chosen to reflect the plasma concentrations expected in a typical 24 h pharmacokinetic profile after administration of 150 or 300 mg INH tablets (Agrawal et al., 2001; Hee et al., 2015; Wang et al., 2013; Xu et al., 2013).
Determination of RIF
Sample preparation was performed by protein precipitation: 500 μL of each human plasma calibration standard was treated with 1400 μL of acetonitrile-water (70:30, v/v) containing ascorbic acid at 179 μg/mL and RIM at 36 μg/mL (the internal standard). After vortexing and centrifugation for 10 min (120 Hz, room temperature), 1 mL of the supernatant was collected, placed into a 1.5 mL polypropylene tube and filtered through a 0.22 μm Millipore Durapore® membrane filter (Billerica, MA, USA). One hundred microliters of this solution were injected into the HPLC system and quantified at 339 nm.
Determination of INH
One hundred microliters of AT (0.16% w/v) and 200 μL of a methanolic solution of p-hydroxybenzaldehyde (1.5% w/v), which acted as the derivatizing agent, were added to 250 μL of human plasma samples. Proteins were precipitated with 400 μL of 10% v/v trifluoroacetic acid solution and 1 mL of methanol. After vortexing and centrifugation for 10 min (120 Hz, room temperature), 1.00 mL of supernatant was filtered through a 0.22 μm Millipore Durapore® membrane filter (Billerica, MA, USA). One hundred microliters of this solution were injected into the HPLC system and quantified at 273 nm.
Method validation
All validation procedures were performed according to the US Food and Drug Administration (FDA) guidance for the validation of bioanalytical methods (FDA, 2001). The validation parameters were specificity, linearity, sensitivity, accuracy, precision, recovery and stability in human plasma.
Selectivity
Selectivity was studied by comparing chromatograms of six blank plasma samples with plasma samples spiked with RIF and INH. Each blank sample was tested for interference, and selectivity was ensured at the lower limit of quantification (LLOQ).
In addition, the resolution (R) was determined as a measure of the separation between the peaks of interest, using Equation 1 described in the Farmacopea Argentina (Ministerio de Salud, 2003): R = 2 (t_rA - t_rIS) / (w_A + w_IS) (Equation 1), where t_rA and t_rIS are the retention times (in minutes) and w_A and w_IS are the peak widths (in minutes) of the analyte of interest (RIF or INH) and its internal standard, respectively. The acceptance criterion was a resolution value ≥ 2.
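For illustration only, this calculation is straightforward to script. The sketch below is not part of the original method, and the retention times and peak widths in the example are hypothetical values, not data from this study:

```python
def resolution(t_r_analyte, t_r_is, w_analyte, w_is):
    """Peak resolution per Equation 1: R = 2*(tA - tIS) / (wA + wIS)."""
    return 2.0 * (t_r_analyte - t_r_is) / (w_analyte + w_is)

# Hypothetical example: analyte at 8.0 min, internal standard at 6.0 min,
# peak widths of 0.50 and 0.75 min.
R = resolution(8.0, 6.0, 0.50, 0.75)
print(f"R = {R:.1f}")  # 3.2 -> meets the acceptance criterion R >= 2
```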
Linearity
Calibration curves were obtained on three different days by analyzing standard plasma samples of each analyte at six concentrations; data were processed by weighted (1/x) least squares linear regression. The linearity of each matrix-matched calibration curve was determined over the designated concentration range.
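As a side note, this type of weighted fit can be reproduced with standard numerical tools. The sketch below uses NumPy with made-up calibration data, since the study's raw responses are not reported here; note the square root, because NumPy applies the weights to the residuals before squaring:

```python
import numpy as np

# Hypothetical calibration data: nominal concentrations (ug/mL) and
# peak-area ratios (analyte / internal standard); not study data.
conc = np.array([0.16, 0.31, 1.51, 4.68, 7.56, 37.80])
ratio = np.array([0.011, 0.021, 0.102, 0.315, 0.508, 2.540])

# np.polyfit minimizes sum((w * (y - fit))**2), so a 1/x weighting in the
# usual weighted-least-squares sense corresponds to w = sqrt(1/x).
w = np.sqrt(1.0 / conc)
slope, intercept = np.polyfit(conc, ratio, 1, w=w)

r2 = np.corrcoef(conc, ratio)[0, 1] ** 2  # unweighted r^2, for reference
print(f"slope={slope:.4f}, intercept={intercept:.4f}, r^2={r2:.4f}")
```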
Accuracy, precision and lower limit of quantification
The precision (presented as the coefficient of variation, CV) and accuracy of the assay were assessed by analyzing quality control samples at three concentrations. Precision is expressed as the coefficient of variation: CV = (SD/mean) × 100, and the accuracy error is expressed as the bias: [(measured concentration − nominal concentration)/nominal concentration] × 100. The intra-day CV and accuracy of the method were evaluated based on the analysis of five samples. The CV and accuracy for inter-day assays were assessed at the same concentration and repeated on three different days.
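To make these two formulas concrete, here is a minimal sketch; the replicate values are hypothetical, not data from this study:

```python
import statistics

def cv_percent(values):
    """Precision: CV = (SD / mean) * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100

def bias_percent(measured_mean, nominal):
    """Accuracy: bias = (measured - nominal) / nominal * 100."""
    return (measured_mean - nominal) / nominal * 100

# Hypothetical intra-day replicates (n = 5) at a 4.68 ug/mL QC level:
reps = [4.52, 4.71, 4.60, 4.81, 4.66]
print(f"CV = {cv_percent(reps):.2f}%")                              # < 15%
print(f"bias = {bias_percent(statistics.mean(reps), 4.68):.2f}%")   # within +/-15%
```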
The LLOQ was defined as the lowest concentration of analyte that can be determined with acceptable accuracy and precision. The signal from the analyte in the LLOQ sample should be at least 5 times greater than the signal of the blank sample (FDA, 2001).
Acceptance criteria were: bias within ±15% of the nominal value and within- and between-run precision lower than 15% (except 20% for the LLOQ).
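These acceptance rules are easy to encode as checks. This is our sketch, with names of our choosing, and it reads the 20% relaxation as applying to precision only, as the sentence above states:

```python
def passes_lloq(analyte_signal, blank_signal):
    """LLOQ criterion: analyte signal at least 5x the blank signal."""
    return analyte_signal >= 5 * blank_signal

def passes_run(bias, cv, is_lloq=False):
    """Bias within +/-15%; precision below 15% (20% at the LLOQ)."""
    cv_limit = 20.0 if is_lloq else 15.0
    return abs(bias) <= 15.0 and cv < cv_limit
```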
Recovery
Recovery was assessed at two concentrations for each drug (1.67 and 41.79 μg/mL of RIF and 4.41 and 88.11 μg/mL of INH) by comparing the peak areas of triplicates at each concentration for RIF and INH standards in Milli Q water and standards spiked into human plasma before protein precipitation. Recovery was calculated as the ratio of the mean peak area of an analyte spiked into plasma before extraction to the mean peak area of the same analyte spiked into Milli Q water at the same concentration, multiplied by 100.
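Expressed as code (the peak areas below are hypothetical, for illustration only):

```python
def recovery_percent(mean_area_plasma, mean_area_water):
    """Recovery = (mean area in plasma / mean area in water) * 100."""
    return mean_area_plasma / mean_area_water * 100

# Hypothetical mean peak areas of triplicates at 1.67 ug/mL of RIF:
print(f"recovery = {recovery_percent(152300, 160100):.1f}%")  # ~95.1%
```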
Stability
Short-term, post-preparative and freeze-thaw stability of the samples was assessed at low and high concentrations for each analyte (1.67 and 41.79 μg/mL for RIF and 4.41 and 88.11 μg/mL for INH).
Freeze-thaw stability was determined in triplicate at 1, 2 and 3 cycles of freeze-thawing. At time zero, after taking an aliquot for quantification, the enriched plasma was separated into 2 mL Eppendorf tubes and frozen at −20°C. After 24 h, samples were thawed at room temperature and kept sheltered from light for about 1 h to ensure temperature equilibration. Immediately thereafter, an aliquot was taken, processed and quantified (first freeze-thawing cycle). This cycle was repeated twice, completing the second and third freeze-thawing cycles. Short-term stability was determined from these plasma samples kept at room temperature for 6 h (the expected time for processing of the samples each day) and quantified by HPLC (n = 3). Post-preparative stability was determined after 24 h of storage at room temperature in the autosampler.
The responses obtained for the fresh samples and those submitted to the stability studies were compared, and the acceptance parameter was defined as a bias within ±15%.
Performance of HPLC system
Our study separated the analytes from plasma samples by protein precipitation with trichloroacetic acid and organic solvents, which is an easy, rapid and convenient method (Unsalan et al., 2005), avoiding liquid-liquid or solid-phase extraction of the drugs, which would increase the complexity of sample processing (Balbão et al., 2010; Hee et al., 2015; Walubo et al., 1994). Panchagnula et al. (1999) quantified RIF and desacetyl rifampicin in human plasma using an isocratic HPLC-UV method. The current bioanalytical method was therefore developed from those chromatographic conditions, with small modifications made to improve the resolution of the chromatographic peaks. An increase in the proportion of methanol in the mobile phase (from 65% to 75%) decreased the retention times of RIF and the internal standard from 11.9 and 7.9 min to 5.8 and 4.7 min, respectively. The flow rate reduction from 1 to 0.5 mL/min allowed a better resolution of RIF relative to RIM (from 2.3 to 3.2) while maintaining the symmetry of the peaks.
Preliminary studies showed that INH, which is a hydrophilic compound, elutes with the solvent front and plasma impurities. Thus, we performed a derivatization step, which consists of the reaction between the aldehyde group of p-hydroxybenzaldehyde and the hydrazine group of INH to obtain a more hydrophobic INH hydrazone. This strategy was previously reported by Gupta (1988) and made it possible to enhance resolution, with sensitive detection of INH (retention time 9.6 min). This approach is a usual procedure to modify the retention time and permitted INH quantification. In addition to p-hydroxybenzaldehyde, other common INH derivatizing reagents such as cinnamaldehyde, salicylaldehyde and 2-fluorene-carboxaldehyde have been reported (Agrawal et al., 2001; Gupta, 1988; Unsalan et al., 2005).
Initially, as described in the literature, pyrazinamide was used as an internal standard. However, its chromatographic peak was not well resolved from the solvent front, whether using a 150 mm or 250 mm column or varying the methanol:water proportions in the mobile phase from (85:15 v/v) to (70:30 v/v). Then, other internal standards with solubility and log P similar to those of the INH hydrazone, such as paracetamol, isonicotinic acid, hydrochlorothiazide and AT, were evaluated. AT was the only compound that did not react with p-hydroxybenzaldehyde and presented a unique and symmetrical signal at an appropriate retention time, allowing an adequate quantification of INH under the established conditions.
Unlike other methods (Balbão et al., 2010; Hee et al., 2015; Khuhawar and Rind, 2002; Walubo et al., 1994), the plasma processing of RIF and INH samples did not require an incubation period or an extraction or concentration step, so the process was extremely fast. The runtime for RIF was 12 min, while that for INH was 25 min; retention times were 7.72 min for RIF and 9.56 min for INH. In summary, the samples had an acceptable quantification time. The use of internal standards did not affect the linearity in the concentration ranges used.
Plasma fractionation is a common methodology (Kim et al., 2015; Unsalan et al., 2005; Walubo et al., 1994), since the determination of both analytes in a single chromatographic run is complex. For example, Unsalan et al. (2005) attempted the joint quantification of RIF, INH and pyrazinamide in a single run. However, the method had to be modified by performing two chromatographic runs, one allowing the quantification of INH and the other that of pyrazinamide and RIF.
Selectivity
The selectivity of chromatographic system 1 is illustrated in Figure 1, where a complete separation of RIF and RIM can be observed, with a resolution of 3.2. Comparison of blank plasma samples and plasma samples enriched with RIF and RIM confirmed the absence of interference from the biological matrix at the LLOQ. As can be seen in Figure 1B, no signals of INH were observed in the chromatograms of the RIF quantification.
The selectivity of INH chromatographic system 2 is illustrated in Figure 2, where a complete separation of the derivatized INH compound (INH hydrazone) and AT can be observed, with a resolution of 11.9. Comparison of blank plasma samples and plasma samples enriched with INH and AT confirmed the absence of interference from the biological matrix at the quantification limit. The method was also specific, as no interference was found with samples containing RIF (Figure 2B).
Linearity, accuracy, precision and lower limit of quantification
The linearity results are presented in Table 1 and show a good correlation between the peak area ratios of RIF/RIM and INH/AT and the drug concentrations, with r² greater than 0.9989 for all curves. The calibration curves were linear over 0.16 to 37.80 μg/mL of RIF and 0.89 to 71.36 μg/mL of INH.
As shown in Table 2, the intra-day and inter-day accuracy was within ±15%, and the intra-day and inter-day precision at each tested concentration level did not exceed 10.39% for RIF and 8.42% for INH. The LLOQ was 0.31 µg/mL for RIF and 0.89 µg/mL for INH.
The LLOQs of RIF and INH were similar to those previously reported for liquid chromatography UV methods (with LLOQs ranging from 0.1 to 0.47 µg/mL for RIF and 0.1 to 1.8 µg/mL for INH) (Balbão et al., 2010; Goutal et al., 2016; Khuhawar and Rind, 2002; Melo et al., 2011; Milán-Segovia et al., 2007; Prasanthi et al., 2015). Moreover, this method is more accessible than LC-MS/MS and also offers simple sample preparation with reliable specificity.
An advantage of this study was the use of a minimal volume of plasma (750 µL for each full analysis), less than the volumes employed in other similar studies; for example, Walubo et al. (1994) used 2000 µL and Khuhawar and Rind (2002) used 1500 µL. Therefore, the method presented herein is suitable not only for carrying out pharmacokinetic tests in humans but also in animal models such as dogs and baboons (Goutal et al., 2016; Wang et al., 2013).
It is interesting to note that the plasma concentrations expected in a typical 24 h pharmacokinetic profile after administration of a fixed-dose combination of RIF and INH tablets reach 28-30 µg/mL of RIF and 5-8 µg/mL of INH in animal models or humans (Agrawal et al., 2001; Wang et al., 2013; Xu et al., 2013). Our method was validated over a concentration range that includes these RIF and INH plasma levels and is more convenient than that proposed by Prasanthi et al. (2015), in which linearity was only ensured at concentrations between 60 and 150 µg/mL of RIF and 40 and 100 µg/mL of INH. Additionally, since this method was validated over a concentration range embracing therapeutic and sub-therapeutic RIF and INH plasma levels, it could also be convenient for accurate therapeutic drug monitoring of INH and RIF treatment in patients. In fact, it is known that therapeutic levels of RIF should range between 8 and 24 µg/mL and that the dose should be increased when plasma levels fall below 5.6 µg/mL, since this value is correlated with therapeutic failure (Alsultan and Peloquin, 2014; Peloquin, 2002).
The use of this method for RIF and INH plasma determination in the presence of PIR, ETA or streptomycin, drugs usually combined in tuberculosis treatment, should be further explored.
The recovery of RIF and INH was high and similar at all concentrations studied. Therefore, the proposed extraction method was adequate and reproducible.
Stability
The RIF and INH human plasma samples were stable after 6 h at room temperature under working conditions and remained unchanged after a further 24 h of storage in the autosampler following protein precipitation. In addition, RIF plasma samples remained stable after three cycles of freeze/thawing. Nevertheless, INH plasma samples remained stable only after two cycles of freeze/thawing, since the bias for the lower-concentration INH samples in cycle 3 was greater than 15% (Table 3).
Unlike in acidic aqueous solutions, RIF was more stable in plasma (Alsultan and Peloquin, 2014), because it is highly bound to proteins (Boman and Ringberger, 1974) and because this medium has a pH at which hydrolysis of RIF would be minimal. In fact, RIF withstood one more freeze/thawing cycle than INH. Consequently, the stability of the RIF and INH plasma samples during the usual processing and quantification times was assured.
In summary, the developed method was a useful analytical tool for the quantification of RIF and INH in pharmacokinetic assays. The method demonstrated high precision, selectivity and stability using small volumes of plasma, in agreement with the FDA guideline (FDA, 2001).
CONCLUSIONS
This paper describes a reproducible HPLC method that enables the determination of RIF and INH in plasma samples. The sample preparation method is simple, since it requires only protein precipitation, with no extraction or concentration step.
The chromatographic systems, developed using an isocratic method and UV detection, showed good selectivity, robustness and stability, and are suitable for a reliable determination of these compounds. The HPLC assay methods presented here could be successfully applied to the determination of pharmacokinetic profiles after oral administration of fixed-dose combination tablets of RIF and INH, with potential applicability in the drug monitoring of tuberculosis treatment.
Table 1:
Regression parameters for rifampicin (RIF) and isoniazid (INH) calibration standards in human plasma (0.16 to 37.80 μg/mL of RIF and 0.89 to 71.36 μg/mL of INH, respectively; n = 6 in each case). Data were fitted by linear regression with the least squares method.
Table 2:
Accuracy (bias %) and intra-day and inter-day precision (CV %) of RIF over the range evaluated.
Table 3:
Stability of human plasma samples containing RIF and INH (n = 3). The acceptance criterion was a bias within ±15%. | 4,705.4 | 2018-07-01T00:00:00.000 | ["Medicine", "Chemistry"] |
Graduate unemployment in South Africa: Perspectives from the banking sector
Orientation: South Africa is experiencing growth in its graduate labour force, but graduate unemployment is rising with the overall unemployment rate. Graduate unemployment is problematic, because it wastes scarce human capital, which is detrimental to the economy in the long run. Research purpose: This study explores the perceived causes of graduate unemployment from the perspective of the South African banking sector. Motivation for the study: Researchers have conducted various studies on graduate unemployment in South Africa and across the globe, but few studies have been conducted on the causes of graduate unemployment. There appear to be some gaps in the literature; therefore, other problems and solutions to graduate unemployment have to be explored. Research approach, design and method: The researchers followed a survey design. Questionnaires and face-to-face interviews were used as research instruments to identify the perceived causes of graduate unemployment in the banking sector of South Africa. Research participants were unemployed graduates, recently employed graduates and graduate recruitment managers in the banking sector. Main findings: The study shows that several factors are perceived to be the causes of graduate unemployment in the South African banking sector. These include: skills, the institution attended by the graduate and differences in expectations between employers and graduates. Practical/managerial application: The findings have implications for educational institutions and companies, which are encouraged to consider possible solutions to resolving the causes of graduate unemployment. Contribution/value-add: This study is one of the first papers to investigate the causes of graduate unemployment in the South African banking sector. It provides a rich platform for further studies and replication in other sectors, especially within the African context.
Introduction
Unemployment is a socio-economic challenge with substantial economic and social costs. It reduces economic welfare, reduces output and erodes human capital. These costs make unemployment a high priority in countries such as South Africa.
The South African economy has been experiencing rising unemployment over many years. In 2011, the unemployment rate was 24.2% and by 2013 it had increased to 25.2% (Statistics South Africa, 2014). At the same time, a structural change was observed in labour demand trends shifting towards high-skilled workers (Bhorat & Jacobs, 2010). This structural change has led to the conclusion that the unemployment problem in South Africa is structural, in the sense that poorly educated workers, who constitute the vast majority of the labour supply, cannot find employment due to insufficient demand for low-skilled resources (Pauw, Bhorat & Goga, 2006).
All around the world, the youth face even higher rates of unemployment than older members of the labour force. Since the global financial crisis, this has become even more pronounced (Levinsohn, Rankin, Roberts & Schöer, 2014). In the case of South Africa, this phenomenon is particularly acute.
Young South Africans, who are defined as those within the age group of 15-34 years (Department of Basic Education [DBE], 2011), have, however, become better educated over the last decade. This trend can be observed in the increasing enrolment rate at tertiary educational institutions. According to Broekhuizen and Van der Berg (2013), 57.9% of the labour force has completed their education at a tertiary level. This implies that the size of the graduate labour force is increasing in accordance with the structural change in demand for high-skilled workers (Daniel, 2007).
However, unemployment amongst young South Africans is still very high and is increasing. It furthermore displays significant differences across racial lines (Levinsohn et al., 2014). Levinsohn et al. (2014) further indicate that in the second quarter of 2012, the broad unemployment rate for those between 20 and 24 years of age was higher than 60%. In the case of Africans in this category the figure is even higher (almost two-thirds). This rate increased by more than ten percentage points between 2008 and 2012. This is in stark contrast with the less than 20% in the white racial category for the same age group. Given the increasing level of both youth unemployment and graduate unemployment in particular, the current study investigates unemployment amongst graduates in the South African banking sector.
The rationale for studying unemployed graduates
Despite the growth in the graduate labour force, graduate unemployment in South Africa appears to be rising together with the overall unemployment rate. Although graduate unemployment only accounts for 5.9% of the changes in overall unemployment from 1996 to 2012, the actual unemployment rate for this education group has increased from 5.4% in 1995 to 7% in 2012 (Broekhuizen & Van der Berg, 2013).
Although the absolute size is small, this increase shows that graduate unemployment has increased over time. It is important to improve the understanding of the reasons for this phenomenon. In addition to the statistics highlighted, graduate unemployment is also an important area of study because theory suggests that the South African economy experiences severe skills shortages, whilst the economy is unable to generate sufficient job opportunities for graduates. This seems to contradict studies suggesting that the higher a participant's education level, the higher the probability of finding employment (Oosthuizen, 2005). It is therefore important to identify the necessary requirements for graduates to become employed.
In this study, a graduate is defined as a degree holder, including holders of postgraduate degrees. Unemployed graduates are defined as recently graduated youths (16-35 years) without employment.
Studying graduate unemployment is important as the ages 20-24 are the ages when the trajectory of labour market involvement and outcome are shaped (Levinsohn et al., 2014). Obtaining initial work experience is a key objective of young graduates in this age category. Failing to do so may lead to undesirable outcomes which may be irreversible in many cases. Banerjee, Galiani, Levinsohn, McLaren and Woolard (2008) suggest that previous work experience is correlated with whether one has a job currently or not. Therefore, finding a first job is fundamental for people's lifetime work trajectory (Levinsohn et al., 2014).
Graduate unemployment is also an important area of study because unemployment amongst graduates is potentially damaging to the economy. The South African government is aware of this problem and has used different policies in an attempt to address it. One of the strategies designed to address graduate unemployment is known as the Joint Initiative for Priority Skills Acquisition (JIPSA). Its strategy proposes the implementation of special training programmes, bringing back retirees or expatriates to work in South Africa and attracting new immigrants (Pauw et al., 2006).
This strategy has been able to provide a partial solution to the problem of graduate unemployment. It has, however, not been able to decrease graduate unemployment, because there are more graduates searching for jobs (Pauw et al., 2006).
Various researchers have conducted studies on this kind of unemployment. Naong (2011), for example, analysed why graduates struggle to find employment and found that most graduates struggle because they have chosen the wrong field of study and because of other weaknesses in the South African educational system. He proposed the promotion of an entrepreneurial culture in schools to solve the problem. Sha (2006) studied unemployment amongst graduates with degrees in computer science in Malaysia. Conclusions on reasons for graduate unemployment were based on the perspectives of both graduates and employers. The results revealed that graduates are of the opinion that employers are to be blamed for their being unemployed, whilst employers perceive the attitude of computer science graduates towards work as negative. This makes employers reluctant to hire new graduates.
Although these and other studies have been able to analyse the graduate unemployment situation in South Africa and elsewhere, there appear to be some gaps in the literature. Most of the solutions to graduate unemployment are centred on education, whilst education is not the only cause of graduate unemployment in South Africa (Lie, 2007). As a result, other problems and solutions to graduate unemployment need to be explored. Not many studies have been conducted recently on graduate unemployment in South Africa, leaving researchers and policymakers uninformed. This study will therefore also explore other possible causes of graduate unemployment and their solutions.
In order to contribute to the body of literature on graduate unemployment, the objective of the study is to investigate the labour market for graduates in South Africa with a focus on the banking sector. The focus is on the banking sector because it is one of the largest employers of labour with various higher qualifications and skills in South Africa. Standard Bank, for example, has 52 000 employees. The banking sector is also considered amongst the top contributors to long-run economic growth (BANKSETA, 2011).
The study will explore the supply side of the labour market, which consists of unemployed graduates and recently employed graduates, as well as the demand side, represented by employers in the banking sector. This research intends to study the basic aspects that are often overlooked or overstated when addressing graduate unemployment.
Literature review on graduate unemployment
Various explanations are provided for the causes of graduate unemployment. Some ascribe it to a lack of employability, the type of qualification obtained and field of study, the quality of secondary school education, the quality of tertiary education, high expectations, job search and work experience.
Various authors have offered definitions for employability. Weligamage (2009) and Sha (2006) agree, however, that employability relates to the skills and attributes needed to gain employment and progress in an industry.
A survey of graduates conducted by Griesel and Parker (2009) highlighted the main categories of skills and attributes that employers look for when recruiting graduates. Basic skills, intellectual ability, workplace skills, applied knowledge and interactive skills were regarded as most important.
South African graduates are often unsuccessful in the recruitment phase because they lack some of the required skills. Pauw et al. (2006) argue that many of these students who lack the required skills come from historically black institutions, where they did not develop these skills or acquire work experience. As a result, graduates become disadvantaged because the skills they learned are not required in the working environment. Lie (2007) argues that graduates should not be entirely blamed for their unemployment. Graduates, employers, government and universities are all involved. Graduates are more academically oriented and lack awareness of the latest developments and applicable skills. Graduates generally believe that their education and skills are sufficient, and universities consider their graduates to be well prepared for the workplace. At the same time, employers perceive graduates to lack the vital skills for employment. Such perspectives must be addressed to enhance the employability of graduates.
The quality of secondary education was also highlighted as another reason for graduate unemployment. According to Mlatsheni (2005), few matriculants pass with exemption and many students are regarded as functionally illiterate. This means that an average Grade 12 student who has just matriculated does not possess adequate writing, mathematical and communication skills to perform at university level, because the quality of secondary school education is low. As students are not prepared to enter and succeed in further higher learning, high dropout rates are experienced (Pauw et al., 2006).
Differences in the quality of tertiary institutions also lead to graduate unemployment. These institutions can be grouped into historically white institutions (HWIs) and historically black institutions (HBIs). Graduates from HWIs have much better employment prospects than those from HBIs. HWI graduates are absorbed into the labour market faster after obtaining degrees (Bhorat & Visser, 2010).
Another reason why HWI graduates are more employable than those from HBIs relates to the fact that HBIs enrol students in fields of study with low employment prospects. Employers are biased towards employing students who have graduated from certain universities because of their perception of the universities' educational standards and culture. Generally, employers have the perception that 'HBIs are much poorer in ensuring success in the labour market for their client base than HWIs' (Bhorat & Visser, 2010, p. 5).
Graduate unemployment may also be due to graduates' unrealistically high expectations. They usually assume that their qualifications must ensure high salaries and managerial positions immediately (L.M. Luan, personal interview, 24 October 2012). Sirat and Shuib (2012) found that many graduates expect to be paid high salaries, but have a bad attitude towards work. They do not understand the importance of the process of gaining work experience over time. With such expectations, no graduate would wish to start at a low level. Consequently, they select jobs and prefer to remain unemployed until they get their dream employment (Pauw et al., 2006). From the employers' perspective, graduates are viewed as new entrants with a degree, but lacking the required soft skills and experience to work effectively in the workplace (L.M. Luan, personal interview, 24 October 2012). Employers, therefore, pay less and train the graduates according to the needs and demands of the workplace. High expectations amongst graduates will therefore increase graduate unemployment if the graduates' perceptions persist.
Another reason for high graduate unemployment relates to job search activities. Moleke (2010, p. 93) defines job search as a 'normal occurrence which work seekers use as a form of investment to enhance their position in the labour market'. Job search is therefore seen as an activity that requires time and effort to achieve a goal, which in this case is a job. For graduates, the process of searching for a job takes longer, because graduates are seen as new entrants without working experience. They therefore need to go through job advertisements, complete application forms, write psychometric tests in some cases, attend interviews and write a formal assessment before a permanent offer is made (L.M. Luan, personal interview, 24 October 2012). Moleke confirms this, finding that over half of the unemployed graduates had been searching for employment for more than a year and that African graduates search longer, some having searched for more than 2 years (L.M. Luan, personal interview, 24 October 2012). Exploring the possible reasons for this observation is an important research agenda going forward.
Method
This section describes the research method followed in this study.
Research participants
The population from which the sample was drawn was divided into two groups, namely the graduate group (unemployed graduates and recently employed graduates) and the human resource (HR) group (graduate recruitment managers). Graduate recruitment managers were selected for the HR group because they play a significant role in recruiting, managing and training graduates. They were selected from the four biggest banks in South Africa, namely ABSA, Nedbank, First National Bank and Standard Bank of South Africa. These four banks were chosen because they are the largest employers of graduates in the banking sector. Each bank annually employs between 100 and 200 graduates (L. Ntuli, personal communication, 28 October 2012). Graduates with diverse qualifications are employed, in fields including engineering, mathematics, history, economics and finance. These four banks were therefore ideal areas of investigation for possible causes of graduate unemployment in the South African banking sector. For the second group, the graduate recruitment managers, the study made use of the purposeful sampling method. The purposeful sampling method was selected based on the fact that, due to their key position in the recruitment cycle, the graduate recruitment managers could assist in providing information that is important to the study (Greener, 2008).
The unemployed graduates were selected from the South African Graduate Development Association (SAGDA) database. SAGDA's database was used because it contains information such as the names of graduates, age, gender, qualification, major subjects, location and institution. The database also has 2405 records of unemployed graduates (Z. Ngewu, personal communication, 28 October 2012). All this information assisted in the compilation of an unbiased sample of unemployed graduates and postgraduates with diverse qualifications. A sample size of 400 unemployed graduates and 400 recently employed graduates (a total sample of 800) was envisaged for the research. This study used a stratified random sampling method for the first group, the graduates, to determine the sample units, because the total population is divided into groups that differ along selected characteristics such as academic qualifications and types of institutions. The sampling was done by separating each graduate group by the qualifications offered by universities and selecting a simple random sample from each group. According to previous conversations with graduate recruitment managers (R. Tshidi, personal interview, 10 September 2012), 50% of graduates employed by the banks have Bachelor of Commerce (BCom) qualifications, 40% have Bachelor of Science (BSc) degrees, whilst 10% have Bachelor of Arts (BA) qualifications.
Based on these recruitment percentages by qualification, 50% of the sample in the graduate group (unemployed and recently employed graduates) were BCom graduates, 40% were BSc graduates and 10% were BA graduates. The authors acknowledge that different qualifications comprise various subjects and, in order to account for the subjects, this study randomly selected students with different subjects in each qualification group; for example, students with a major in economics, marketing or finance were randomly selected from the BCom group (a sketch of this stratified draw is given below). Because their qualifications theoretically enable them to be employed in various sectors, whether respondents had in fact sought employment in the banking sector was specifically tested by including a question in the survey instrument to verify that the person had applied at one or more of the banks included in this study.
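For illustration, the stratified draw described above could be implemented as follows; this is our sketch, not the authors' code, and the record structure and names ('pool', 'degree') are assumptions:

```python
import random

# Recruitment shares by qualification, as reported by the banks.
QUOTAS = {"BCom": 0.50, "BSc": 0.40, "BA": 0.10}

def stratified_sample(pool, total_n, quotas=QUOTAS, seed=1):
    """Draw a simple random sample from each qualification stratum."""
    rng = random.Random(seed)
    sample = []
    for degree, share in quotas.items():
        stratum = [g for g in pool if g["degree"] == degree]
        sample.extend(rng.sample(stratum, round(total_n * share)))
    return sample

# e.g. a target of 400 unemployed graduates: 200 BCom, 160 BSc, 40 BA
# sample = stratified_sample(sagda_records, 400)
```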
Recently employed graduates were selected from the particular banks. They were selected because they had passed through the process of applying for a job as a graduate and were able to assist in identifying possible requirements to gain employment as a graduate and to prepare for the workplace.
Research approach
For this investigation, the researchers applied a survey-based approach. The survey was based on questionnaires distributed to employed candidates who had recently graduated and to unemployed graduates, as well as face-to-face interviews conducted with graduate recruitment managers of the banks. The intention was to gain an understanding of the perceived causes of graduate unemployment in the banking sector.
Research instruments
This research utilised survey questionnaires that varied according to the groups. Closed and open-ended online and manual questionnaires were designed for the graduate group (unemployed and recently employed graduates), whilst open-ended and face-to-face interview questionnaires were used for the HR manager group. The questions were drawn from the literature on the causes of graduate unemployment and an understanding of the banking sector.
The survey questionnaire used in the case of unemployed graduates consisted of five separate sections. The first was designed to collect demographic data for cross-sectional analysis. Next followed questions relating to the choice of university and degrees, followed by two sections on the application and job search process; the final section considered the causes of graduate unemployment.
The survey questionnaire used in the case of recently employed graduates was designed in the same way, except for the sections focusing on the transition to the workplace and their expectations versus reality. The questionnaire also consisted of five separate sections, starting with demographic and educational sections designed in the same way as described above. In addition, candidates were asked about the number of job offers received and the time period from application to offer. Section D asked candidates about their experience of the transition from study to employment, whether the skills that they had acquired during their period of study were adequate and whether their employers had the same views about their skills. The last section compared the expectations of graduates before they were employed to their experience in the workplace.
The graduate manager survey questionnaire was unstructured. The questions formulated in this questionnaire considered the company's intention to hire graduates and the importance of recruitment strategies, such as the graduate recruitment programme designed annually for graduates. The next questions focused on the selection process when hiring graduates as well as the post-selection process. This covers what happens after graduates are hired, the costs involved after graduates are hired, the skills level of graduates before and after being hired, the retention rates of graduates and the incentives offered to retain graduates.
Data collection procedure
For the unemployed and recently employed graduates, an online survey questionnaire was created with the help of an online research website known as Question Pro. The link to the survey was customised and posted on the SAGDA website (SAGDA, 2012). An online survey was chosen because it is easily accessible. For those who did not have access to the Internet, the survey was manually printed and distributed to a list of graduates, which was made available by SAGDA.
The HR managers were also approached to administer the survey during the training of graduates, when they felt it would be most convenient. For the graduate recruitment managers, questions were sent via email and contact details were collected during a South African Graduate Recruitment Association (SAGRA) conference. Meetings were booked in order to conduct face-to-face interviews.
Data sorting process
Of the original 800 completed surveys, 170 were discarded due to incompleteness or because unemployed respondents had subsequently obtained employment. The survey of the graduate recruitment managers was easier to conduct, as it was interview based. In total, the remaining sample comprised 300 unemployed graduates, 330 recently employed graduates and four graduate recruitment managers.
Data analysis
The responses to each of the survey questions for the unemployed graduates and recently employed graduates were compiled in a comparative table to understand the common views, the disagreements of the graduates and the perceived changes in graduate unemployment.
Content analysis, which is a form of analysis that focuses on actual content and information (Stemler, 2001), was used to analyse the results derived from the graduate recruitment managers. Results were in the form of written responses and notes from the interviews. Each question in the interview was analysed in comparison to the responses from the four banks. After a within-group comparison, the results of both groups (i.e. graduates and recruitment managers) were compared. Areas of disagreement and common interest between the groups were analysed to understand the dynamics of the graduate labour market. The results were then used to offer tentative policy recommendations about possible solutions to the causes of graduate unemployment.
Demographic characteristics
Even though the survey was sent to both men and women, there was a slightly higher response rate from men. Approximately half of the unemployed graduate respondents were in the age bracket 21 to 24. Based on age, the assumption could be made that many people graduate in the age bracket 21 to 24 but are unemployed. Of the recently employed graduates, 36% fell between the ages of 25 and 30. Based on these findings, another assumption is that more graduates between the ages of 25 and 30 gain employment in the banking sector. These assumptions are in line with Altbeker and Storme's (2013) findings that younger graduates are more likely to be unemployed than older graduates, because employers are more reluctant to hire younger graduates.
The view that there seems to be a change in graduate unemployment in the banking sector in terms of race is confirmed in Table 1. The banking sector now hires more black graduates, indicating a directional hiring change from hiring white graduates to hiring other races (Altbeker & Storme, 2013). Taking the opposing view, Moleke (2010) argues that racial discrimination might be a cause of graduate unemployment, as there are still more white graduates in the banking sector than black graduates. The findings of the current study did not support this. The modern economy, especially the banking sector, now hires any race as long as the graduate possesses the required skills and knowledge. The following statistics provide examples of this trend. In 2013, Africans' share of junior management positions in one of the big four banks in South Africa rose to 72% from 68% in 2011 (Barclays Africa Group Limited, 2013); 80% of all new appointments are also from this group. As a result of higher demand for Africans in the banking industry, turnover of African staff is slightly higher than the company's average as well (Barclays Africa Group Limited, 2013).
Choice of university and degree
The highest concentration of respondents in the unemployed graduate group came from the University of Fort Hare, followed by the University of Zululand, University of the Free State (UFS) and University of Limpopo. Graduate recruitment managers regard these universities as access universities that do not carry out regular career exhibitions or any form of graduate recruitment (L.M. Luan, personal interview, 24 October 2012). On the other hand, most of the recently employed graduates studied at the University of Johannesburg (UJ), University of Cape Town (UCT), University of the Witwatersrand (Wits) and University of South Africa. According to employers, these universities are visited regularly in terms of career awareness programmes, counselling and recruitment, because of their reputation of high educational standards, facilities and equipment. It then becomes easier for graduates from the non-access universities to gain employment and develop their careers.
The majority of the graduate group respondents were undergraduates and honours degree graduates, whilst very few possess master's degrees, as indicated in Table 2. Educational qualification alone is not the most important criterion for gaining employment, since the majority of the recently employed graduates are undergraduate degree holders. The sample of recently employed graduates showed that 32% have honours degrees, offering candidates a better chance to gain employment. It seems that a degree is not enough to gain employment. There are possibly other factors that qualify a graduate in the banking sector for employment.
Results show that most of the recently employed graduates have part-time, internship and voluntary work experience, which gave them an advantage. This may provide another explanation for the observation from the data, which shows more employability from age 25 onwards, when additional work experience is more likely. This is therefore not merely a case of workplaces not wanting to appoint younger candidates.
Skills amongst graduates
The kinds of skills perceived to have been acquired by both groups during their period of study are illustrated in Table 3. The majority of recently employed graduates felt that they had acquired communication, research and computer literacy skills during their period of study. The majority of the unemployed graduates felt that they had acquired the same set of skills as the recently employed graduates, and yet they are still unemployed. It is important to note that most of the skills ticked by the graduates during the survey were based on their own sense of the kind of skills they possess and not on employers' perceptions. Therefore, graduates can be confident about their perceived set of skills, but this may be in contrast to the reality of the employers' experience.
Expectations versus reality
Most of the unemployed graduates and recently employed graduates expected to earn a salary within the bracket of R275 000-R300 000, as indicated in Table 4. This salary range does not differ from what the banks are currently paying graduates. The interviews with the graduate recruitment managers confirm that graduates are paid between R250 000 and R320 000 annually. Unemployed graduates' salary expectations are therefore not different from what employers offer, and Sirat and Shuib's (2012) argument about salary expectations could be questioned. Higher salary expectations, as suggested by Sirat and Shuib, do not necessarily hinder graduates from gaining employment in the case of the banking sector, as they desire what the industry offers.
Human resource group (demand side)
The survey revealed that all banks have the same intention to hire graduates, which relates to a successful supply of suitable candidates and the development of special skills to suit the needs of the company. The fact that graduates are recruited every year emphasises that banks are willing to employ graduates. Banks' intentions to hire graduates are also illustrated by their graduate development programmes that enable the recruitment of new graduates every year.
The survey results indicated that banks mainly visit UCT, UJ, the University of Pretoria, Wits and the University of KwaZulu-Natal during their graduate selection and recruitment process. These universities are regarded as the top universities in South Africa, with sound reputations and a high quality of education. Most important is that the banks had employed a number of graduates from these universities in the past and they were excellent performers.
Another important factor that determines which universities should be visited or which graduates should be recruited is cost. It was found that banks spend a large amount of money on career exhibitions and career awareness programmes to entice graduates every year. The cost of hiring a graduate involves career exhibitions at universities, which cost R180 000, plus the training cost, which is between R300 000 and R650 000 per graduate when training is outsourced to a third party. In addition to the training cost, a graduate's salary ranges between R250 000 and R320 000, and salary scales increase annually by 6.9%. These costs restrict the number of graduates employed every year and the number of universities visited.
The research concluded that banks prefer to employ slightly older graduates rather than younger graduates. Younger graduates are assumed to be too inexperienced and immature for the challenges of the working world (Biavaschi et al., 2013), and special recruitment and training programmes are expensive. This may again be linked to the notion that somewhat older graduates may have acquired additional skills beyond those linked to the degree itself.
It was also found that a graduate's qualification is regarded as an entry point of consideration, but it is not a guarantee of employment, as a graduate needs more than academic training to function effectively. During the recruitment process, employers search for the required skills, and if it is sensed that a graduate lacks these required skills, the graduate will be declined. It can be deduced that graduates who rely exclusively on their qualifications tend to remain unemployed.
Discussion
The purpose of this study was to investigate possible causes of graduate unemployment in the banking sector through surveys of graduates and human resource managers.
It was found that slightly older graduates are more likely to be employed in the banking sector than younger graduates. Employers regard older graduates as more mature and experienced. It is therefore concluded that age might be a cause of graduate unemployment in the banking sector. The researchers do, however, consider the sample size insufficient to generalise affirmatively.
In terms of race, South Africa is producing more black graduates every year and their unemployment rates are declining, compared to past years when more white graduates than black graduates were hired. This confirms L.M. Luan's (personal interview, 24 October 2012) findings.
In other words, there has been a directional change from hiring white graduates to hiring other races. There has also been a move towards equality, as the focus has shifted to the required skills and knowledge that graduates possess.
The results suggest that the kind of institution that a graduate attends and the perception of employers about those institutions lead to higher levels of graduate unemployment amongst graduates from access universities.
With the exception of the UFS, these are all HBIs. Graduates from the so-called non-access universities are preferred to those from HBIs because of employers' perceptions about these universities and the fact that HBIs do not engage with employers to understand the right skills needed to be successful. When cost is taken into consideration, employers would rather spend their resources on candidates from non-access universities, as the cost involved in recruiting is high and it only makes sense to invest these large costs in candidates with adequate skills.
The information provided by HR managers raises the question of endogeneity in terms of industry perceptions of top-class universities and the choice of where career exhibitions are held. The question is whether the existing perception of where top universities are located determines where career exhibitions are held. The exhibitions foster good relationships with those institutions, but at the same time this practice marginalises other institutions and thereby perpetuates the existing perceptions of them not producing good candidates. The cost factor and the fact that the universities with whom good relationships exist fulfil the demand for graduates may put candidates from other universities at a disadvantage. One way of overcoming this is for good graduates from other institutions to pursue postgraduate studies in the bigger centres where the perceived good-quality institutions are situated. This may in itself reinforce the notion of these institutions providing top candidates. This aspect also requires further research and decomposition of the origin of postgraduate students at these preferred institutions.
The results have shown that educational qualification alone is not the only criterion for gaining employment in the banking sector. Advanced qualifications do, however, make candidates more likely to be employed. The survey shows that employers demand work experience, skills and certain qualifications based on their particular needs. This in itself can be a problem, as the industry then does not contribute towards the building of experience.
In terms of labour demand, as the economy changes, organisations change their business requirements, and changes in business requirements lead to changes in the qualifications and skills needed to suit the needs of the organisation. A graduate without a qualification or skill that is needed by the organisation will therefore remain unemployed and will have to wait until their qualification is in demand. It can be concluded that changes in business requirements will lead to changes in the demand for labour (in this case graduates) in banking.
The issue of skills was also identified as a possible cause of graduate unemployment in the banking sector. The survey showed that employers require certain skills such as leadership skills, soft skills, management skills and cultural fit. A graduate without these skills might therefore not be employed. The results also suggest that employers do not regard university-based skills as sufficient. University-based skills might be too academic or not job related. They must therefore be supplemented with other skills.
Finally, it can also be deduced that although differences in expectations between employers and graduates could be a reason for graduate unemployment, higher salary expectations do not necessarily hinder graduates from gaining employment, as they usually aim for what the sector is offering. The economy is driven towards graduates who place value on their skills and what they have to offer; therefore, higher salary expectations might not necessarily be the cause of graduate unemployment.
The authors realise the danger of generalising from a sample of 630 respondents and believe that this study should be repeated regularly to obtain a more comprehensive understanding of graduate unemployment in the banking sector over time. However, the study draws on actual primary data and therefore adds valuable perspectives on the perceptions of graduate unemployment in the banking sector.
The results of this study have several implications for universities, managers and policymakers. Most important is the realisation that graduate unemployment is detrimental to the economy in the long run. This study suggests a number of recommendations to reduce graduate unemployment.
In the first place, it is recommended that the skills shortage be addressed by organisations and educational institutions working together towards the implementation of appropriate programmes that would provide graduates with the necessary skills required in practice. Professional bodies have a role to play here. They usually interact with universities and play a role in the accreditation of programmes. The appropriate body in the banking industry can play an important role in this regard. Given that graduates are drawn from different degree groups, this envisaged engagement around skill requirements needs to be particularly structured.
Organisations and educational institutions can also work together to provide career guidance to graduates to ensure the selection of employable subjects and courses.
Working together can also assist in improving the quality of the so-called 'soft skills' that are needed for graduates to contribute successfully to the productivity of organisations. These skills include teamwork, business ethics, interpersonal relationships, public speaking, meeting conduct and even time management. Government can help by providing graduate recruitment subsidies to improve young graduates' options of finding employment.
The work done by SAGDA is potentially of significance here. SAGDA was hired by the Department of Trade and Industry (DTI) to implement and monitor its Unemployed Graduate Work Experience Placement programme. The programme aims to enhance graduates' employability by placing them in internship programmes. It assists graduates to gain relevant work experience through a 12-month internship programme in private sector companies. The DTI sources internship placements for unemployed graduates from companies that are receiving incentives from the DTI as well as those that are interested in the programme (DTI, 2014). The programme's vision is to provide relevant work experience to at least 1200 unemployed graduates over three intakes in the 2 years from 2014 onwards. The National Skills Fund within the Department of Higher Education and Training has approved funding to the value of R71 million over a 2-year period for monthly stipends for placed graduates, which will be managed by the DTI (DTI, 2014).
Conclusion
In final conclusion, the broader problem of graduate unemployment cannot be separated from the macro-economic picture of persistent long-run unemployment in South Africa. Unemployment in general also impacts graduates, although those leaving school early are even more adversely affected. Unless the National Development Plan's roadmap to reduce unemployment in South Africa is successfully implemented and yields sufficient results, unemployment in general, and graduate unemployment as a subset thereof, will exhibit the same persistence as is the case at the moment. The broader problem of graduate unemployment can only be solved in the long run if the overall problems in South Africa's schooling and educational system receive the attention they deserve. This will require the vision and will of dedicated leadership on a political and social level. Only then will the mass of unemployed graduates be replaced with a workforce that productively contributes to the South African society everyone desires. | 8,569.4 | 2015-04-16T00:00:00.000 | ["Business", "Economics"] |
On Numerical Analysis of Carreau–Yasuda Nanofluid Flow over a Non-Linearly Stretching Sheet under Viscous Dissipation and Chemical Reaction Effects
: This work reports on the Carreau–Yasuda nanofluid flow over a non-linearly stretching sheet under viscous dissipation and chemical reaction effects. The coupled system of non-linear partial differential equations is transformed into a system of ordinary differential equations employing similarity transformations. The spectral quasi-linearization method was used to solve the resulting differential equations numerically. Error norms were used to authenticate the accuracy and convergence of the numerical method. The effects of some thermophysical parameters of interest in this study on the non-dimensional fluid velocity, concentration and temperature, the skin friction, and the local Nusselt and Sherwood numbers are presented graphically. Tables are used to depict the effects of selected parameters on the skin friction and the Nusselt number. It is shown that an increase in the thermal radiation parameter leads to an increased heat transfer rate, reflected in an increase in the wall temperature gradient.
Introduction
Researchers have recently shown a lot of interest in the study of non-Newtonian fluids due to their widespread applications in areas such as the mining industries, where slurries and mud are handled, and in biology and medicine. Unlike Newtonian fluids, non-Newtonian fluids exist abundantly in nature. Some typical examples of non-Newtonian fluids that occur commonly include chyme, blood, oil, paints and lubricants. Non-Newtonian fluids cannot be modelled using the classical Navier-Stokes equations due to the non-linear relationship between the shear stress and the rate of strain at a given temperature. It is also worth mentioning that non-Newtonian fluids are more complicated to handle than Newtonian fluids, Kahshan et al. [1].
A detailed explanation and classification of non-Newtonian fluids can be found in Cioranescu et al. [2]. In response to these demands observed in nature and industry, researchers have proposed several non-Newtonian fluid models. Shear-thinning is one of the main behaviours exhibited by most non-Newtonian fluids, so its influence is worth investigating; this class of fluids has many applications in polymer processing and multiple emulsions. In an endeavor to anticipate the rheological features of shear-thinning fluids, several models, such as the Ellis model, have been proposed. Viscous dissipation, or Joule heating, refers to the generation of heat in a conductor due to the passage of an electric current. The process has several applications in industry and engineering. Typical examples where Joule heating has found relevance are electrical gadgets such as light bulbs, electric stoves, electric heaters and electric fuses. Khan et al. [29] studied Joule heating in an MHD Carreau nanofluid flowing through an inclined stretching cylinder that is convectively heated. Cheng and Wu [30] numerically studied viscous dissipation effects on the onset of instability for longitudinal vortices in the thermal entrance region of a horizontal parallel plate channel. Boubaker et al. [31] studied viscous dissipation effects on the thermal boundary layer of pseudoplastic power-law non-Newtonian fluids using a discretization method and the Boubaker polynomials expansion scheme. Recently, Lund et al. [32] analyzed the effects of viscous dissipation on heat transfer in MHD micropolar fluid flow under partial slip conditions.
The main objective of this study is to investigate the magnetohydrodynamic flow of a Carreau-Yasuda nanofluid over a non-linearly stretching sheet subjected to slip, convective and zero-normal-flux conditions with viscous dissipation, thermal radiation and chemical reaction. To the best of the authors' knowledge, no study has been reported on this kind of physical phenomenon for the Carreau-Yasuda model. Motivated by the above-mentioned studies, the current study seeks to fill the existing gaps in this field of research. Using well-chosen similarity transformations, the system of defining partial differential equations is transformed into a system of ordinary differential equations. The transformed differential equations and the associated boundary conditions are solved numerically using the spectral quasi-linearization method (SQLM), which was introduced by Motsa et al. [33]. The SQLM has been successfully used by researchers to solve coupled systems of non-linear differential problems. Alharbey et al. [34] used the SQLM to investigate micropolar fluid flow over a horizontal plate embedded in a non-Darcy porous medium. Pal et al. [35] successfully used the SQLM to analyze entropy generation of a steady, laminar, two-dimensional convective magnetohydrodynamic Jeffrey nanofluid flow over a linearly stretching sheet. The spectral quasi-linearization method was used by Das et al. [36] to study a Casson fluid with homogeneous-heterogeneous reactions and non-linear thermal radiation over an exponentially stretching sheet. In all these studies, the SQLM was found to be accurate, computationally efficient and reliable. These attributes of the numerical method encouraged the authors of this work to use the SQLM.
Mathematical Formulation
Considered in the present analysis is a two-dimensional, steady, laminar, incompressible MHD flow of a Carreau-Yasuda nanofluid over a stretching sheet under slip and convective boundary conditions. As shown in Figure 1, the flow occupies the region y > 0, where y is normal to the stretching plate and x is along the plate. A transverse magnetic field is applied to the surface along the y axis. The non-Newtonian viscosity of a Carreau-Yasuda fluid is defined by Khan et al. [15].
where the Rivlin-Ericksen tensor is $A_1 = \nabla V + (\nabla V)^{T}$ and the shear rate is $\dot{\gamma} = \sqrt{\tfrac{1}{2}\,\mathrm{tr}(A_1^2)}$. Assuming a zero value for the infinite-shear-rate viscosity, that is $\mu_\infty = 0$, Equation (1) simplifies accordingly. In the current study, the velocity field is given by V = [u(x, y), v(x, y), 0], the temperature distribution by T = T(x, y) and the nanoparticle concentration by C = C(x, y). The continuity, momentum, energy and concentration equations then follow, where $\tau = (\rho c)_p/(\rho c)_f$ is the ratio of the effective heat capacity of the nanoparticles to that of the base fluid.
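For concreteness, the standard Carreau-Yasuda constitutive relation, $\mu(\dot{\gamma}) = \mu_\infty + (\mu_0 - \mu_\infty)\left[1 + (\Gamma\dot{\gamma})^a\right]^{(n-1)/a}$, can be evaluated numerically. The sketch below uses illustrative parameter values (not those of the paper) and shows the shear-thinning behaviour for n < 1:

```python
import numpy as np

def carreau_yasuda_viscosity(gamma_dot, mu_0, mu_inf, Gamma, n, a):
    """Apparent viscosity of a Carreau-Yasuda fluid.

    mu_0   : zero-shear-rate viscosity
    mu_inf : infinite-shear-rate viscosity (set to 0 in the formulation above)
    Gamma  : material relaxation time
    n      : power-law index (n < 1 -> shear-thinning)
    a      : Yasuda transition parameter
    """
    return mu_inf + (mu_0 - mu_inf) * (1.0 + (Gamma * gamma_dot) ** a) ** ((n - 1.0) / a)

# With mu_inf = 0, as assumed above, viscosity decays with increasing shear rate:
shear_rates = np.logspace(-2, 3, 6)
print(carreau_yasuda_viscosity(shear_rates, mu_0=1.0, mu_inf=0.0, Gamma=0.5, n=0.6, a=2.0))
```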
The bottom of the stretching sheet is heated by convection from a fluid at temperature T_f, which generates the heat transfer coefficient h_f, and κ is the thermal conductivity.
An expression for the radiative heat flux $q_r$ is obtained using the Rosseland diffusion approximation, $q_r = -\frac{4\sigma^*}{3k^*}\frac{\partial T^4}{\partial y}$, where $\sigma^*$ is the Stefan-Boltzmann constant and $k^*$ is the mean absorption coefficient. A linear Taylor series approximation of $T^4$ about $T_\infty$ gives $T^4 \approx 4T_\infty^3 T - 3T_\infty^4$. Substituting Equation (9) into Equation (10) yields the radiative term of the energy equation.
Similarity Transformations
Following Khan et al. [15], we introduce dimensionless similarity variables for the velocity components, temperature, nanoparticle concentration and the independent variable. Substituting the similarity variables and their derivatives into the governing equations yields a system of highly non-linear, coupled ordinary differential equations (14)-(16), where $We$ denotes the Weissenberg number. The corresponding boundary conditions follow, with S the slip parameter. The skin friction coefficient, the heat transfer rate and the mass transfer rate are expressed in terms of the shear stress $\tau_w$, the heat flux $q_w$ and the mass flux $q_m$; substituting the similarity variables into these expressions gives the dimensionless skin friction, local Nusselt number and local Sherwood number, where Re is the local Reynolds number.
Method of Solution
The spectral quasi-linearization method is used to seek a numerical solution of the coupled system of Equations (14)-(16), subject to the prescribed boundary conditions. The SQLM is basically a combination of two methods: (i) the quasi-linearization method (QLM) and (ii) the Chebyshev spectral collocation method (CSCM), Shateyi and Muzara [37]. The QLM, originally developed by Bellman and Kalaba [38], is a Newton-Raphson-based quasi-linearization technique used to linearize a system of non-linear differential equations into an iterative sequence of linear differential equations. Applying the quasi-linearization formula to the system of Equations (14)-(16) gives the linearized Equations (22)-(24), subject to the boundary conditions (25), with variable coefficients evaluated at the previous iterate. The sequence of linear differential Equations (22)-(24) is solved iteratively using the CSCM. The linear transformation $\eta = \frac{L_\infty}{2}(1 + \zeta)$ is used to map between the physical domain $[0, L_\infty]$ and the computational domain $[-1, 1]$, where the CSCM can be implemented. With the CSCM, we approximate the unknown functions $f_{m+1}$, $\theta_{m+1}$ and $\varphi_{m+1}$ by Chebyshev interpolating polynomials.
The derivatives of the approximating functions at the Gauss-Lobatto collocation points are expressed through the scaled matrix $D = \frac{2}{L_\infty}\mathcal{D}$, where $\mathcal{D}$ is the $(N + 1) \times (N + 1)$ Chebyshev differentiation matrix as defined by Trefethen [39]; similar expressions hold for the nth derivatives of $\theta_{m+1}$ and $\varphi_{m+1}$. Evaluating Equations (22)-(24) at the collocation points and substituting the derivatives by Equations (26) and (27) yields the matrix system (28)-(30), in which the identity matrix I is of order $(N + 1) \times (N + 1)$. Applying the spectral collocation method to the boundary conditions (25) yields the corresponding discrete boundary conditions (31).
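As an illustration of the CSCM machinery described above, the following sketch constructs the Chebyshev differentiation matrix and Gauss-Lobatto points following Trefethen [39] and scales the matrix to a truncated domain $[0, L_\infty]$; the domain length and N are illustrative choices, not the paper's:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto points x on [-1, 1],
    following Trefethen's 'Spectral Methods in MATLAB'."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)            # Gauss-Lobatto points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))     # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                         # diagonal: negative row sums
    return D, x

# Map to the truncated physical domain [0, L_inf]: eta = L_inf * (1 + zeta) / 2,
# so d/d(eta) = (2 / L_inf) * d/d(zeta), i.e., the scaled matrix used above.
L_inf, N = 20.0, 60
D_hat, zeta = cheb(N)
D = (2.0 / L_inf) * D_hat
```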
Results and Discussions
Discussed in this section are the numerical results of the coupled system of non-linear partial differential equations that model the Carreau-Yasuda nanofluid flow over a non-linearly stretching sheet in the presence of viscous dissipation and chemical reaction, obtained using the SQLM. The linear system of Equations (28)-(30) together with the boundary conditions (Equation (31)) is solved using MATLAB 2016 (OPTI-NUM Solutions (Pty) Ltd, Johannesburg, South Africa). The influence of some physical parameters on the dimensionless velocity, temperature and concentration, the skin friction coefficient, the heat transfer rate and the mass transfer rate is shown graphically. Tables are presented to portray the effects of some selected parameters on the skin friction coefficient and the local Nusselt number.
Using N = 60 collocation points in all the computations, solution-based errors were used to test for convergence of the numerical method. A solution-based error is the difference between two successive solutions, i.e., the solution at iteration level m and m + 1. A plot of the infinity norms of the solution-based errors against the number of iterations is shown in Figure 2. A decrease of the error infinity norm with the number of iterations confirms the convergence of the method. The method converges after six iterations. Figure 3 is a plot of the residual error norms against the increasing number of iterations. It can be seen that there is a sharp decrease in the error as the number of iterations increases. This proves the high accuracy of the current method.
In all the numerical computations done in this work, the default parameters, unless otherwise stated, follow Khan et al. [15] and include M = 0.3, n = 1.0 and We = 3.0, among others. Figure 4 shows the influence of the magnetic field parameter M on the nanofluid velocity. It is observed that when the magnetic field parameter is enhanced, there is a decrease in the velocity distribution of the nanofluid. This is attributed to the Lorentz force that arises from the presence of a magnetic field, which opposes the motion of the fluid and tends to resist the velocity, hence slowing down the fluid. The effect of the velocity slip parameter S on the fluid velocity is depicted in Figure 5. When the values of the slip velocity parameter are increased, the slip velocity increases and the fluid velocity decreases. Figure 6 reveals that enhancing the thermal slip parameter slightly increases the velocity profiles. The Eckert number relates the kinetic energy of the flow to the boundary layer enthalpy difference. As a result of viscous dissipation at increased positive values of Ec, the initial fluid temperature rises, producing more thermal buoyancy effects that induce more fluid flow. An increase in the values of Ec thus results in an increase in the fluid velocity, as shown in Figure 7. Figure 8 shows that increasing the thermal buoyancy parameter λ_1 increases the velocity of the nanofluid. Physically, increasing the thermal buoyancy creates a temperature difference in the flow that causes a reduction in the boundary layer and enhances the fluid velocity.
The effects of some thermophysical parameters, namely the magnetic field parameter M, the Eckert number Ec, the thermal slip parameter δ, the Prandtl number Pr, the thermal radiation parameter R, and the thermal buoyancy parameter λ_1, on the temperature profiles of the nanofluid are portrayed in Figures 9-14. Figure 9 shows that increasing the magnetic field parameter increases the temperature of the nanofluid: increasing the magnetic field parameter tends to reduce the fluid velocity profiles in the boundary layer, which causes a thickening of the thermal boundary layer. The influence of Ec on the temperature profiles is displayed in Figure 10. It is observed that increasing Ec increases the temperature profiles because of the increased viscous dissipation due to internal friction. Figure 11 shows that increasing the thermal slip parameter has the effect of increasing the fluid temperature. The Prandtl number has the opposite effect to that of δ, as shown in Figure 12: increasing the values of Pr is equivalent to reducing the thermal conductivity and hence thins the thermal boundary layer. Figure 13 shows that increasing the thermal radiation parameter increases the temperature of the nanofluid. This is because increasing the radiation releases more heat energy into the flow, thus enhancing the temperature profiles. Increasing λ_1 depresses the temperature profiles, as depicted in Figure 14.
The influence of the chemical reaction parameter γ, the Lewis number Le, N_t, N_b and Ec on the fluid concentration is shown in Figures 15-19. As shown in Figure 15, the concentration profiles decrease with an increase in the chemical reaction parameter. An increased chemical reaction causes the concentration boundary layer to thin due to a reduced chemical molecular diffusion and hence a reduced mass diffusivity. The effect of the Lewis number Le on the concentration of the nanofluid is reported in Figure 16. The Lewis number is defined as the ratio of the thermal diffusion rate to the species diffusion rate; for Le < 1, mass diffusion is rapid and the species boundary layer becomes thin. The figure shows that increasing Le causes an increase in the concentration profiles. Figures 17 and 18 show that increments in N_t and Ec dwindle the concentration of the fluid. Figure 19 reveals that raising N_b increases the concentration distribution and the associated boundary layer. As shown in Figure 21, increasing Pr enhances the heat transfer rate: higher values of Pr mean the thickness of the thermal boundary layer decreases, which in turn results in an increase in the wall temperature gradient. Also, an increase in the thermal radiation parameter leads to an increased heat transfer rate, so the heat transfer rate is increased by increasing R and Pr. Figure 22 reports that increasing the magnetic field parameter M for the thermophoresis parameter N_t (= 0.1, 0.3, 0.5, 0.7) enhances the local Sherwood number. Tables 1 and 2 depict the effects of some selected parameters on the skin friction coefficient and the heat transfer rate, respectively. From Table 1, we can infer that the skin friction coefficient is significantly increased by increasing the magnetic field parameter M and slightly increased by increasing the Eckert number Ec. The table also shows that the velocity slip parameter S heavily depresses the skin friction coefficient, while increasing the thermal buoyancy parameter slightly decreases it. Table 2 reports the heat transfer rate for different values of the selected parameters when n = 1.0 and We = 3.0.
Conclusions
This paper considers the Carreau-Yasuda nanofluid flow over a non-linearly stretching sheet. The effects of viscous dissipation and chemical reaction were taken into account. The system of coupled non-linear partial differential equations describing the model was transformed into a coupled system of ordinary differential equations using suitable similarity transformations. The resulting differential equations were solved numerically using the spectral quasi-linearization method. The convergence and accuracy of the SQLM were determined by the error infinity norms and the residual error infinity norms, respectively. Key findings from this study are the following:
1. The spectral quasi-linearization method is a very efficient and reliable method for solving non-linear differential equations.
2. The fluid velocity and the momentum boundary layer increase with increasing Eckert number, thermal buoyancy parameter and thermal slip parameter, and decrease with the magnetic field parameter and the velocity slip parameter.
3. The fluid temperature increases with increasing magnetic field parameter, Eckert number, thermal slip parameter and thermal radiation parameter, and is decreased by raising the Prandtl number and the thermal buoyancy parameter.
4. The fluid concentration is enhanced by increasing the Lewis number and the Brownian motion parameter, while it is depressed by an increased chemical reaction parameter, thermophoresis parameter and Eckert number.
5. The skin friction increases with an increasing magnetic field parameter and decreases when the Eckert number, thermal buoyancy parameter and the velocity slip parameter are increased.
6. The local Nusselt number increases when the Prandtl number and the thermal slip parameter are increased, while the opposite trend is observed when the thermal radiation parameter and the thermophoresis parameter are increased.
Funding: This research was funded by the University of Venda.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
Abbreviations
The following abbreviations are used in this manuscript: | 4,006.8 | 2020-07-14T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
Automated Code Refactoring upon Database-Schema Changes in Web Applications
Modern web applications manipulate a large amount of user data and undergo frequent data-schema changes. These changes bring up a unique refactoring task: updating application code to be consistent with the data schema. Previous studies and our own investigation show that this type of refactoring is error-prone and time-consuming for developers. This paper presents EvolutionSaver, a static code analysis and transformation tool that automates schema-related code refactoring and consistency checking. EvolutionSaver is implemented as an IDE plugin that works for both Rails and Django applications. The source code of EvolutionSaver is available on Github [1] and the plugin can be downloaded from Visual Studio Marketplace [2], with its tutorial available at https://www.youtube.com/watch?v=qBiMkLFIjbE and DOI 10.5281/zenodo.5276127.
I. INTRODUCTION
Modern web applications often use database engines to manage a large amount of user data, such as user profiles in social network applications and transaction records in on-line shopping platforms [3]. The schema of such data goes through changes, such as table renaming, column deletion, and others, for better performance or functionality when an application evolves [4]. Unfortunately, it is difficult for developers to keep their code consistent with database schema changes all the time, a task we refer to as schema-related code refactoring, with any inconsistency leading to application crashes.
Schema-related refactoring and traditional refactoring like class renaming share similarities, given that popular Object Relational Mapping (ORM) frameworks, such as Rails [5], Django [6], and Hibernate [7], allow database data to be updated and retrieved in an object-oriented way: the name of a database table corresponds to the name of a model class, and the names of table columns are the same as class fields.
However, they also differ in various aspects, due to the unique nature of persistent data, as we elaborate below.
How is schema defined? Different from a regular class, whose field names and field types are defined by its class declaration, a model class's structure has to match its corresponding database table, which is created once at an application's installation or upgrade. In fact, in some ORM frameworks like Rails, persistent fields of a model class are not declared in its class definition and are instead automatically mapped by Rails from the corresponding table schema, which is defined through ORM APIs like create_table in Rails or CreateModel in Django in a type of file called migration files, as shown in Listing 1.
How is schema changed? Schema changes are expressed through ORM APIs in migration files (e.g., line 4 in Listing 1 renames the column sequence in table people), which inform the web application about how to update its database during installation and upgrade. In an ORM framework like Rails, schema changes cannot be seen in class definitions.
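For illustration, a hypothetical Django migration expressing a column rename analogous to line 4 of Listing 1 might look as follows (the file name and model are invented for this sketch):

```python
# people/migrations/0002_rename_sequence.py -- a hypothetical Django migration
# illustrating how a schema change is expressed through migration APIs.
from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [("people", "0001_initial")]
    operations = [
        # Rename the field 'sequence' of the Person model to 'position';
        # any application code still referring to person.sequence becomes
        # inconsistent with the new schema.
        migrations.RenameField(
            model_name="person",
            old_name="sequence",
            new_name="position",
        ),
    ]
```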
What code refactoring is needed? Following a schema change in the migration file, corresponding references in the application need to change. Some of these are just class or field renaming like in line 2 of Listing 2, while some require changing ORM APIs' parameters like in line 1 of Listing 2.
For example, developers of Onebody [8], a popular social network application, used a table people to keep user information. In one commit, they renamed the sequence column in people to position (line 4 in Listing 1). In the same commit, they correctly updated the reference to sequence in 6 places across 4 files, like the one in line 2 of Listing 2, but forgot to change the other 5 places, such as the parameter reference in line 1 of Listing 2.
Recent work has motivated tool support for schema-related refactoring [4] and proposed techniques to synthesize updates to a list of SQL queries given the old and the new schema written in SQL [9]. Although inspiring, this does not directly help many web applications, whose schema changes and database operations are expressed in ORM APIs, rarely if ever in raw SQL. This paper presents EvolutionSaver, a tool that uses ORM-aware static analysis to help schema-related code refactoring in web applications written in Rails [5] and Django [6], two popular web frameworks. Given two versions of a web application, EvolutionSaver analyzes and identifies schema changes from migration files, searches for any code inconsistent with the new schema, and generates warnings and patches accordingly.
To ease its adoption, we have integrated EvolutionSaver into the popular Visual Studio Code IDE [10] as a plugin. Web developers can use this plugin to guide their schema-related refactoring or to look for schema-code inconsistency bugs.
In our evaluation with 12 popular Rails and Django applications, EvolutionSaver detected 86 schema-code inconsistencies caused by 35 schema changes in the past. We have reported 11 of them that exist in the latest versions to developers; 10 have already been confirmed and 6 already patched based on our suggestions. Our examination of the remaining 75 inconsistencies shows that they took many days for developers to discover and fix.
EvolutionSaver's source code is on Github [1] and the plugin can be downloaded from Visual Studio Marketplace [2].
Background.
A web application's schema gets initialized and updated by migration APIs in migration files, a mechanism supported by ORM frameworks. For example, Listing 1 illustrates two migration files, each with one migration API call: the first creates a table named people with two columns, id and sequence, which are automatically mapped to two fields in a corresponding model class with a singular name (the Person class); the second renames a table column, which automatically causes a field-name change in its model class.
During an installation/upgrade of a web application, the ORM framework executes all the latest migration files not yet executed on this installation, calling migration APIs in these files one by one and updating the schema along the way.
Extended motivation. A recent study [4] on 100 Rails applications showed that most applications went through many schema changes. For further motivation, we studied 12 Rails and Django web applications from different categories like forum, e-commerce, social network, etc. 2 They are all highly rated, each with more than 1000 stars on GitHub, 11,000-900,000 lines of code and 500-100,000 commits.
How often do schema changes occur? 18%-85% of application versions contain at least one schema change, and there are more than 8 changes per version on average. Furthermore, changes are common throughout the development history of each application. For example, across the 6 Rails applications, the most recent 25% of commits contain about 25% of all the schema changes in total.
What types of changes are there? As shown in Table I, changes to various aspects of the schema are all common. About three quarters of the changes add tables, columns, or indices, and do not directly cause inconsistency with existing code. The remaining quarter of the changes modify or delete existing tables, columns, associations, or indices, immediately threatening code consistency; hence they are the target of EvolutionSaver, as detailed in Table II.
2 The application list is in our code repository.
III. APPROACH
This section discusses how EvolutionSaver conducts inconsistency checking and refactoring step by step, based on the source code of two versions of an application. Unless specially explained, the code analysis is based on ASTs generated by Yard [11] for Rails and pyast [12] for Django programs.
A. Schema change extraction
Different from previous work [9] that identifies schema changes by comparing the old and the new schemas written in SQL, EvolutionSaver takes the unique opportunity offered by web applications and identifies schema changes directly from all the new migration files in the new version.
Specifically, 12 out of 19 Rails migration APIs and 6 out of 17 Django migration APIs introduce schema changes that can immediately cause code inconsistency. EvolutionSaver matches each such API, or an API-parameter combination in case of Django, with one change type listed in Table II. Whenever such an API call is identified in a migration file, EvolutionSaver extracts related change information, like table and column names, and saves it as a change record for later use. EvolutionSaver aggregates related changes to the same target: deleting a column and then adding it back to the same table will be aggregated and correctly considered as no change.
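The aggregation step can be pictured with a small sketch; the data structures below are illustrative and not EvolutionSaver's actual implementation:

```python
# A minimal sketch of aggregating change records per target, so that deleting
# a column and re-adding it nets out to "no change" (hypothetical structures).
from collections import defaultdict

def aggregate_changes(records):
    """records: list of (change_type, table, column) tuples in migration order."""
    net = defaultdict(list)
    for change_type, table, column in records:
        key = (table, column)
        if net[key] and {net[key][-1], change_type} == {"remove_column", "add_column"}:
            net[key].pop()          # the later change cancels the earlier one
        else:
            net[key].append(change_type)
    return {k: v for k, v in net.items() if v}

records = [("remove_column", "people", "sequence"),
           ("add_column", "people", "sequence")]
print(aggregate_changes(records))   # {} -> correctly treated as no change
```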
Finally, in Rails, since an association relationship is defined partly through model classes, EvolutionSaver compares model class definitions, in addition to migration files, to get association changes. For example, the model class definitions in Listing 3 use has_many to indicate that each User record is related to multiple Comment records, which can be retrieved through the association field comments defined in User.
B. Query extraction
Next, EvolutionSaver identifies all the queries that can be issued by the new version of the application. In ORM, a query can be expressed in two forms: 1) an ORM query API such as find_by in Rails or filter in Django, invoked upon either an object holding a previous query's result or a model class, like that in line 1 of Listing 4; 2) a reference to a model class's association field. For example, following the association definition in Listing 3, @user.comments in Listing 4 issues a query that selects records from table comments as 'select * from comments where user_id = user.id'.
EvolutionSaver identifies both forms of queries and extracts the names of table, column, index, and related association information from each query. The analysis for Django applications is done by analyzing the AST generated by pyast. The analysis for Rails applications is built upon ORM-aware static analysis framework PowerStation [13]. EvolutionSaver uses intra-procedural dependency analysis to identify objects that hold results from a previous query. In theory, it may miss queries on objects defined in a different procedure.
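A much-simplified sketch of query extraction for Django code, using Python's standard ast module, could look as follows; the set of query APIs and the returned tuple shape are illustrative assumptions, and the real analysis is far richer (dependency tracking, association fields, etc.):

```python
# Collect calls to ORM query APIs such as .filter(...) together with the
# keyword (column) names they reference.
import ast

QUERY_APIS = {"filter", "exclude", "get", "values"}

def extract_queries(source):
    queries = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr in QUERY_APIS):
            columns = [kw.arg for kw in node.keywords if kw.arg]
            queries.append((node.func.attr, columns, node.lineno))
    return queries

print(extract_queries("Person.objects.filter(sequence=1, id=2)"))
# [('filter', ['sequence', 'id'], 1)]
```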
C. Inconsistency detection and refactoring suggestion
Finally, EvolutionSaver goes through each schema change, searches for inconsistency with queries in the new version, and generates refactoring suggestion accordingly (Table II).
Name changes. For the renaming of a table, EvolutionSaver checks if the old name T_old is used in queries from the new version; for the renaming of a column (e.g., line 4 in Listing 1) or an association field (e.g., Listing 3) C_old in a table T, EvolutionSaver checks if any of the new version's queries refers to C_old in T. Any such inconsistency is reported, and EvolutionSaver suggests the refactoring to replace the old name with the new name in the corresponding queries.
Deletion. When a table T old is deleted, EvolutionSaver checks if T old is still used in any query. If so, the query is reported as an error, with all the statements in the same procedure that have control or data dependency on it highlighted. Since a table deletion is typically followed by major functionality changes, no refactoring attempt is made here.
When a column or association C in a table T is deleted, EvolutionSaver identifies any query that refers to C in T as an error. In its refactoring attempt, EvolutionSaver removes C from the query and runs the parser to see if the resulting query is valid. If valid, a refactoring suggestion is made; otherwise, EvolutionSaver reports this error without a refactoring suggestion. For example, if column name is deleted from table users, EvolutionSaver would suggest changing User.find_by(name: ?, id: ?) to User.find_by(id: ?).
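The 'remove the column and re-validate' refactoring can be sketched for a Django-style query as below; the regex-based rewrite is a simplification of the AST-based transformation described above, and the helper name is invented:

```python
# Drop 'column=<expr>' from a query's source text and re-parse it to check
# that the result is still a valid expression (a sketch, not the real tool).
import ast
import re

def drop_column_from_query(query_src, column):
    refactored = re.sub(rf"\b{column}\s*=\s*[^,)]+,?\s*", "", query_src)
    try:
        ast.parse(refactored, mode="eval")   # still syntactically valid?
        return refactored
    except SyntaxError:
        return None                          # report without a suggestion

print(drop_column_from_query("User.objects.filter(name=n, id=i)", "name"))
# -> 'User.objects.filter(id=i)'
```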
When an index for column C is deleted, EvolutionSaver identifies any query that conducts filtering on C and issues a warning that the query may become slower.
Type changes. When a has_one association is changed to has_many, the corresponding association field usually gets renamed (from singular to plural), and the results of a query that refers to this association field change from one record to an array of records (and vice versa). EvolutionSaver conducts the association name-change refactoring and warns users that the related queries' return type has changed, which requires further refactoring. For column type changes, users are warned of any query that refers to a column whose type has changed.
A. Features of EvolutionSaver IDE plugin
We have implemented EvolutionSaver as a plugin for Visual Studio Code [14], a popular IDE for multiple languages. As shown in Figure 1, one can press the start button to start the plugin. By default, EvolutionSaver compares the current code with the latest commit. Users can also specify the commits they desire to check in the configuration file.
Issue list. The left panel, as shown in Figure 1, lists all the errors detected by EvolutionSaver in a hierarchical view. The first level lists the files where errors are detected; clicking a file shows the details of every error in that file, including the line number and the type of root-cause schema change; clicking the error shows a "Fix" button.
Issue detail. Clicking a file in the issue list navigates users to the corresponding file in the editor, with the error code highlighted. Users can hover their mouse over the highlighted code and see the detailed explanation, like "GroupTime.ordering is RENAMED TO sequence" in Figure 1.
Issues fix. One can click the 'Fix' button on the left panel or a 'Fix all' button to fix one or all the issues.
B. Implementation
The start button triggers our static analyzer to run on the given code commits. The analyzer produces an output.json file that is parsed to create the issue list using the Visual Studio Code Extension APIs TreeDataProvider and TreeItem. The DocumentHighlightProvider API is used to highlight the selected error code, given the filename and line-number information in output.json. The HoverProvider API displays the tooltip with the detailed reason when hovering over the highlighted code. To fix the error, the TextDocument, Range, and ExtensionContext APIs are used to insert, replace, and delete source code in the editor panel.
V. EVALUATION AND THREATS TO VALIDITY
We have evaluated EvolutionSaver using 6 Ruby-on-Rails applications and 6 Django applications (the same 12 applications described in Section II). For each application, we apply EvolutionSaver on all its consecutive commits.
Detection. As shown in Table III, EvolutionSaver automatically identifies 86 inconsistency errors caused by 35 schema changes, with no false positives based on our manual examination. Among them, 11 errors exist in the latest versions.
After reporting them to developers, 10 have already been confirmed. These 11 errors had existed for 234 days on average (median: 61 days) when reported by us. The other 75 errors on average existed in these applications for 232 days (median: 7 days) and 409 commits (median: 16 commits) until finally discovered and fixed by developers. In theory, some developers may intentionally split schema changes and follow-up code changes into separate commits. This is unlikely for most of these errors, given the long gap taken to fix them. Moreover, about half of them were not fixed until after major code releases.
Refactoring. Among the 11 errors that we reported to developers, for 6 of them developers already accepted the refactoring patches suggested by EvolutionSaver and merged them into the main branch. For the other 75 errors that were fixed by developers in the past, the related statements, functions, or files were often deleted in the fixing commit. For 13 of them, the related code regions still existed in the fixing commit, and these 13 fixes are exactly the same as the refactoring suggested by EvolutionSaver.
Performance. EvolutionSaver takes 3-125 seconds (35s on average) to process consecutive commits of Rails applications with 11,000-900,000 lines of code, and 1-40 seconds (19s on average) for Django apps with 17,000-174,000 lines of code.
Threats to validity. As discussed in Section III, EvolutionSaver may raise false alarms in rare cases. There are also sources of false negatives. Application code that cannot be parsed by pyast or Yard cannot be analyzed by EvolutionSaver. A schema may be changed by SQL commands issued directly to the database without any record in migration files; this is considered a bad practice [15] and is not handled by EvolutionSaver. A schema may also be changed in migration files through raw SQL commands wrapped in ORM APIs like migrations.RunSQL(...) in Django and migrations.execute(...) in Rails. This feature is rarely used by web developers (less than 1% of cases in our study) and is not handled by EvolutionSaver. If the new version adds a table T and then changes the schema of T or its columns, indices, or associations, EvolutionSaver does not check whether the new code is consistent with the schema of T, as T does not exist in the old version. Finally, what we observed in the 12 Rails and Django applications may not apply to other open-source applications.
VI. RELATED WORK
Recent work uses ORM-aware static analysis to detect performance anti-patterns [13], [16] and data constraint problems [17] in database-backed web applications. These works did not look at schema changes and are orthogonal to ours. EvolutionSaver is motivated by recent work [4], [9] about schema changes in web applications, but differs from them as discussed earlier. Specifically, MIGRATOR [9] analyzes schema changes in SQL and synthesizes SQL queries, while EvolutionSaver targets Rails (Ruby) and Django (Python) applications; MIGRATOR handles renaming changes and structure changes like moving a column from one table to another, while EvolutionSaver handles all the changes in Table II.
VII. CONCLUSION
EvolutionSaver is a static analysis tool that detects schema-code inconsistency and suggests refactoring in web applications. Evaluation shows that EvolutionSaver is effective in analyzing large open-source Rails and Django applications. | 4,034.8 | 2021-11-01T00:00:00.000 | [
"Computer Science"
] |
Fault Handling in Industry 4.0: Definition, Process and Applications
Increasing productivity and decreasing production loss are important goals for modern industry to stay economically competitive. This requires efficient fault management and the quick amendment of faults in production lines. The prioritization of faults accelerates the fault amendment process but depends on preceding fault detection and classification. Data-driven methods can support fault management. The increasing usage of sensors to monitor machine health status in production lines leads to large amounts of data and high complexity. Machine learning methods exploit this data to support fault management. This paper reviews literature that presents methods for several steps of fault management and provides an overview of requirements for fault handling and of methods for fault detection, fault classification, and fault prioritization, as well as their prerequisites. The paper shows that fault prioritization lacks research about available learning methods and underlines that expert opinions are needed.
Introduction
For the manufacturing industry, a primary aim is to increase productivity and quality alongside the reduction of unplanned downtimes of machines in production lines, in order to remain economically competitive [1,2]. Machine downtime can be reduced by implementing predictive maintenance methods that lead operators to take care of machines that will soon fall into a fault mode. Still, machine faults occur, which may require replacing parts of the equipment and may also lead to accidents and system failures that cost millions in lost production or cause pollution [3]. Therefore, proper fault handling is needed, as it significantly reduces downtime and manufacturing costs. This process depends heavily on how early a fault is detected and identified after its occurrence, as more possibilities to act will then exist [4]. In addition, there is an impact on the mean time to repair (MTTR), which describes the average time until a machine has been repaired by operating personnel [5]. Fault detection includes the correct determination of a fault's nature, impact, and location in the manufacturing process, based on data produced by the manufacturing system [6,7]. The identified faults can be classified and then prioritized to accelerate the repair actions by the personnel. Therefore, supporting methods for the fault handling steps are needed [8]. Those depend on collected data that gives information about the machine's status.
This data can be produced by sensors that are ubiquitous in modern manufacturing sites. Here, we focus on cyber-physical systems (CPS) in Industry 4.0, also known as cyber-physical production systems (CPPS) [9]. A CPS consists of both cyber elements, e.g., software modules, and physical components, e.g., sensors and actuators [6,10], and their interaction [11,12]. Given the distribution of the term CPPS in research, we also use the broader term CPS while focusing on production systems. In such complex systems, the probability of faults is higher [6,13], and fault detection becomes more complicated and computationally intensive [12]. Therefore, the need for automated and scalable fault handling methods becomes even greater. In this paper, a fault is defined as a machine state in which a process does not perform as required or needed (e.g., [12,14,15]). This paper reviews methods used for fault handling in manufacturing processes, focusing on their use in cyber-physical systems. To this end, the concept of fault handling in CPSs is introduced and requirements for fault handling methods are listed in Section 2. Methods for the single steps of fault handling, namely fault detection (Section 3.1), fault classification (Section 3.2) and fault prioritization (Section 3.3), are introduced briefly, and a listing of literature references for each step is given in Section 3. The results of the literature review are discussed in Section 4. A conclusion and a prospect of future research areas are given in Section 5.
Requirements for Effective Fault Handling
To perform effective fault handling in an industrial production environment, prerequisites are required for methods to perform well. First, there are conditions that need to be fulfilled by the production site. Most fault diagnosis methods depend on the machines' historical data within production lines. These methods are data-driven and can only perform well if a sufficient amount of data is available [16]; they therefore need to be scalable to handle the huge amount of data. As modern production sites implement Internet of Things (IoT) devices [17] and several sensors to monitor machine condition, the data can also reach a high complexity. Sensors monitor various variables, based both on physics, like vibration, temperature, and pressure, and on processes of a manufacturing system, like process deviations, control settings, and machine specifications [1,17]. The complexity in data can be handled, e.g., via dimensionality reduction methods. Complexity is caused not only by the number of sensors used but also by the general structure of the machines in production lines and how these, with their respective sensors, are connected [5]. Additionally, the collected data is temporal: sensor data is often gathered as time-series data, which also needs to be handled by the methods used to produce insights about the machine's status [18]. The temporal aspect is also relevant in industries that need to implement real-time fault detection and diagnosis, as they can only use methods that are able to process data quickly and give results in real time. Another general requirement for fault handling methods is the ability to train models on imbalanced data, as in production, data representing normal conditions is seen far more often than data indicating faulty machine states [19,20].
In addition, data quantity and intrinsic complexity impose requirements on fault handling methods. CPSs produce a huge amount of data, as they include many sensors and devices for processing and communication tasks, which connect physical elements like machines with cyber data-processing units [6,11]. In addition, they consist of several subsystems, which leads to a higher complexity of the production line itself and its data relations [12,14]. Moreover, the higher complexity makes CPSs more prone to faults [13]. Therefore, fault handling supported by automated methods becomes more important. The complexity and high dimensionality also result in computationally expensive feature extraction, which emphasizes the need for effective dimensionality reduction. Another reason for the need for automated fault handling in CPSs is that not only the machines can experience fault states; the sensors themselves can also generate faulty data, so sensor faults need to be handled as well.
As this paper reviews methods for different steps of fault handling, requirements for those steps need to be explicitly outlined. First, fault detection, the entry point of fault handling, works with raw sensor data. Therefore, the methods used need to be able to work with the given raw data, or feature extraction methods need to be implemented as well; which method is applied also depends on the raw data format available in the monitored system. Methods used for fault classification need to discriminate several groups of faults. A prerequisite for achieving this goal is that fault types are already defined and historical data is labeled, so that performant classifiers can be trained. In case labeling data is not feasible, unsupervised methods are used; then, experts need to be at hand to describe the fault groups identified by the algorithm [1]. The classification and detection of faults with their respective types in the data are essential for the next step, fault prioritization. During prioritization, the opinions of experts are needed [21,22]. They need to consider the various faults that can occur in the monitored system. They also need to consider that machines do not work separately but depend on each other. So, in addition to a single machine's criticality, experts need to take the chain of faults into account and how machines influence other processes in the production line [5]. Domain knowledge of experts is also important: production lines and the employed CPSs have a high level of individuality with many configuration possibilities [23], so copying methods from one production line to another is not feasible without substantial manual effort.
Methods
This section shows which methods are used in the literature for the different fault handling steps. The fault handling process, as shown in Figure 1, consists of data collection, data pre-processing, and feature selection before training any models. The step of fault amendment follows after the decision has been made of which fault to tackle first. The methods presented in this paper are used for the fault detection, fault classification, and fault prioritization steps. We focus on this mid-tier process (fault detection, fault classification, fault prioritization) because the data structure of each employed CPS is unique, and a generalization cannot be given for all available methods in the context of this work. The overall fault handling process begins with data collection, including pre-processing of the data and feature handling, before training models for fault detection and fault classification. Afterwards, fault prioritization occurs; after that, all found faults are handled manually by the operating personnel or automatically during the fault amendment process. Due to the individual nature of the steps or their maturity, not all research fields are covered by this survey. Investigated research fields are covered by grey boxes in Figure 1, whereas white boxes cover the remaining steps.
In addition, we declare fault amendment to be an important process step, which describes the correction process after the fault handling steps. First attempts at fault amendment exist; e.g., Diedrich, Balzereit et al. [24-26] investigate an automated reconfiguration of CPSs after a fault is detected. To the best of our knowledge, no other automated attempts have been made in the field of fault amendment, which prompts us to include this step separately in the process without deeper analysis, leaving a field open for future research.
Therefore, this review only considers papers whose described methods can be mapped to one sub-step of the above-mentioned fault handling process (fault detection, classification, and prioritization). Note that various types and architectures exist for each listed method, which may be used in the context of Industry 4.0-related fault diagnosis. For this reason, we only provide a selected excerpt, which should be seen as examples of the following methods.
3.1. Fault Detection
Fault detection is the process of finding the occurrence of a fault in a unit of the monitored process based on measurements provided by the system. Such faults lead to abnormal or system-critical behavior of the machine, reducing the performance of the whole system significantly [21,27,28]. Some referenced papers also include in fault detection the identification of further fault characteristics like impact, location, or time of occurrence, and the actions taken to avoid further damage [6,7,29], whereas others state that this is a specific part of fault isolation [28]. In this paper, we consider methods that detect whether a fault occurred or not, irrespective of a possible subdivision into the fault categories [20]: abrupt (e.g., [30]), incipient (e.g., [31]), or intermittent (e.g., [32]). Any further identification is handled by fault classification methods.
Fault detection methods can be separated into three groups [14,20,33]. Data-driven models learn the system's behaviour by training and thus depend on enough available data [16]. Data-driven approaches use analytical models and historical data; they do not depend on knowledge of the monitored process structures and are scalable regarding the number of sensors they draw data from [22]. As the amount of gathered sensor data is increasing and machine learning techniques have developed rapidly, many researchers focus on data-driven methods for fault diagnosis problems [34]. Model-based methods require building a specific model that includes the architecture and process of the monitored system, as well as correlations and relations between the various process variables [20]. Mining those relations can be computationally intensive in large-scale systems with complex structures and requires a huge number of sensors [22]. Knowledge-based methods for fault diagnosis rely on sets of rules formed by expert knowledge of the monitored system and the relations between several fault types [33]. The dependency on knowledge makes those models very system-specific, which makes updating more complicated [16].
We align our work with recent outstanding surveys [14,20,35] and set the perspective around our defined fault handling process. Due to the scalability to vast amounts of data, this paper will focus on data-driven methods. Table 1 gives an overview of the presented methods, including further references, which have not been presented in this paper due to brevity.
| Method | Details | References |
| --- | --- | --- |
| Neural network | Self-organizing map | [36] |
| Neural network | ANN | [37,38] |
| Random forest | Classification problem (normal, fault) | [39] |
| k-Nearest Neighbors (kNN) | Ensemble method based on kNN, with random forest and k-means for feature selection | [40] |
| Naïve Bayes classifier | Ensemble method based on Naïve Bayes classifier, with random forest and k-means for feature selection | [40] |
| Kernel PCA | Training on only normal data points and using a threshold for fault detection | [19,41,42] |
| TEDA (Typicality and Eccentricity Data Analytics) | Unsupervised algorithm, no previous knowledge needed; detects outliers as faulty data samples | [3,43-45] |
| Improved Support Vector Machines (SVM) | OS-LSSVM uses a sparsity component to increase the prediction speed of sensor values; a fault is detected in case of high residual error | [46-48] |
3.1.1. Neural Networks
Neural networks can be used as supervised or unsupervised learning techniques for the problem of fault detection [36,37]. Artificial neural networks (ANNs) learn complex non-linear functions. They also learn the importance of input features so that no preceding feature extraction methods are needed to reduce the complexity. However, input features need to be normalized to ensure that features with a larger scale will not be privileged [37].
Heo and Lee [37] applied a supervised ANN method to solve fault detection formulated as a binary classification problem. The model consists of an input layer with one node for each feature, several hidden layers, and one softmax layer that calculates the output values. The output neuron with the highest assigned value defines the data set class, which is in case of fault detection either normal or faulty. The authors train ANNs with different hidden layers and nodes per hidden layer using the rectified linear unit (ReLU) as an activation function. They apply the classifier to the Tennessee Eastman process, which is used as a benchmark process with defined fault types in literature (e.g., [49][50][51]). Their results are compared to those of Yin et al. [52] and Zhang and Zhao [53] and achieve the best overall detection rate.
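A minimal sketch of this kind of supervised ANN fault detector, using scikit-learn's MLPClassifier with ReLU activations on synthetic data (the architecture and data are illustrative, not those of Heo and Lee [37]), is shown below:

```python
# Binary fault detection with a feed-forward ANN: normalized features,
# ReLU hidden layers, and a class decision from the network output.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_normal = rng.normal(0.0, 1.0, size=(500, 10))
X_faulty = rng.normal(1.5, 1.0, size=(100, 10))   # shifted -> "faulty"
X = np.vstack([X_normal, X_faulty])
y = np.hstack([np.zeros(500), np.ones(100)])      # 0 = normal, 1 = fault

X_scaled = StandardScaler().fit_transform(X)      # normalize feature scales
clf = MLPClassifier(hidden_layer_sizes=(64, 32), activation="relu",
                    max_iter=500, random_state=0).fit(X_scaled, y)
print("training detection rate:", clf.score(X_scaled, y))
```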
Von Birgelen et al. [36] train self-organizing maps (SOM), as introduced by Kohonen [54], for fault diagnosis. The SOM learns the characteristics of the normal behavior of components in a CPS. That means no faulty data sets are needed for training, which makes this method suitable for imbalanced data. The SOM is an unsupervised neural network architecture that organizes its neurons in a topological map. At the end of the training stage, each neuron represents a unit encompassing a part of the training data. Live data is mapped to the best fitting unit to calculate the quantization error, the distance between the actual data value and the mapped unit. If the quantization error exceeds a threshold, the data set is considered faulty, and further fault diagnosis can be performed. The authors evaluate their approach via experiments on real-world systems, including industrial plants.
3.1.2. Random Forests
Random forests consist of uncorrelated decision trees trained independently with a random choice of considered split features and training data sets. A random forest is a supervised method, as labeled data is needed to find splits [55]. Imbalanced data can be handled by weighted sampling methods or a penalty on the misclassification of the minority class [56,57]. Due to the randomly selected split features and the independently trained trees, random forests are suitable for high-dimensional data and big data sets [58].
Yan and Zhou [39] use historical flight sensor data to detect and predict anomalies in aircraft components. They formulate the detection problem as a three-class classification problem with one class to represent normal state and two classes representing faults. A random forest is trained based on features that are extracted by using statistical analysis and correlation analysis. Their proposed method is evaluated in a case study on a component of an aircraft system.
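The three-class formulation can be sketched with scikit-learn's RandomForestClassifier on synthetic data; class weighting is one of the imbalance countermeasures mentioned above, and all values are illustrative:

```python
# Random-forest fault detector as a three-class problem:
# one normal class and two fault classes, on synthetic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 8))
y = np.repeat([0, 1, 2], [500, 60, 40])           # 0 = normal, 1/2 = fault types
X[y == 1] += 2.0                                  # separate the fault classes
X[y == 2] -= 2.0

rf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                            random_state=1).fit(X, y)
print(dict(zip(*np.unique(rf.predict(X), return_counts=True))))
```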
3.1.3. k-Nearest Neighbors (kNN) and Naïve Bayes
Both k-Nearest Neighbors and Naïve Bayes can be used in ensemble classifiers for fault detection, in which the ensemble chooses the class that is represented most among the results of the trained classifiers. kNN is a non-parametric classification algorithm that assigns a new observation to the majority class among its k nearest neighboring observations. The method is sensitive to non-informative features, which likely occur in high-dimensional data; the ensemble method with a random selection of features counters this problem [59]. The Naïve Bayes classifier is a probabilistic classifier that assumes that all features are pairwise independent. A new observation is assigned to the class with the highest calculated posterior probability [60]. In comparison to a classical Bayes approach, the ensemble method results in higher efficiency of Naïve Bayes with regard to high-dimensional data, because in the classification stage all features are considered to calculate the posterior probability [61].
Fan et al. [40] use ensemble models based on k-Nearest Neighbors (kNN) and Naïve Bayes classifiers to classify wafers in semiconductor manufacturing as flawless or faulty. To handle the high-dimensional data, they train a random forest model to get the variable importance of all sensor variables. These are clustered by k-means, and the cluster of variables with the highest average variable importance is used in the final fault detection step. The ensemble models based on kNN and Naïve Bayes were trained with data that included an equal amount of randomly selected faulty and normal data sets to prevent inaccuracy due to class imbalance. The models were compared by sensitivity and specificity, and the kNN ensemble method performed better than the Naïve Bayes method.
3.1.4. Kernel Principal Component Analysis
Kernel Principal Component Analysis (KPCA) is a learning method, trained only on normal data, that can be used to detect faulty observations. KPCA is an extension of PCA in which the data points are mapped into a higher-dimensional space by a kernel function; PCA is then performed in the higher-dimensional space [62]. During the training process, KPCA is performed on data samples that represent the normal state; hence, class imbalance is irrelevant for fault detection. Consequently, KPCA succeeds on imbalanced and only partly labeled production line data. The maximum reconstruction error during the training stage defines the threshold used for fault detection in the test stage: for a new data set, the reconstruction error is calculated, and if it exceeds the threshold, the data set is considered faulty. As the kernel is an N by N matrix, with N being the number of observations, the time to generate the kernel matrix increases strongly for large N. The KPCA method for fault detection is used by Wang et al. [19], who evaluate it in a case study on a data set comprising industrial etching processes. Yang, Chen, and Sun [41] address the problem that KPCA cannot otherwise be used for real-time detection by reducing the training data set to an approximate basis that consists of a minimum of training samples but still represents the total training samples well.
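A minimal sketch of the KPCA detection scheme, fitting on normal data only and thresholding the reconstruction error, assuming scikit-learn's KernelPCA with inverse-transform reconstruction enabled and synthetic data:

```python
# Fit KPCA on normal samples, set the threshold to the maximum training
# reconstruction error, and flag new samples that exceed it.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(2)
X_train = rng.normal(0.0, 1.0, size=(300, 5))          # normal operation only

kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.1,
                 fit_inverse_transform=True).fit(X_train)

def reconstruction_error(model, X):
    X_hat = model.inverse_transform(model.transform(X))
    return np.linalg.norm(X - X_hat, axis=1)

threshold = reconstruction_error(kpca, X_train).max()
X_new = rng.normal(3.0, 1.0, size=(5, 5))              # drifted samples
print(reconstruction_error(kpca, X_new) > threshold)   # True -> flagged faulty
```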
3.1.5. Typicality and Eccentricity Data Analysis
Typicality and Eccentricity Data Analysis (TEDA) is an unsupervised method introduced by Angelov [44] that can be used for fault detection, as well as for classification, clustering, and prediction problems [43,63]. The model uses a data analysis method following the concepts of typicality and eccentricity. In this context, typicality is described as the spatial similarity of a data sample to all other data samples, whereas eccentricity states the difference of a data sample from all other data samples. The method operates without any assumptions about the data distribution or data independence, which are unlikely to hold in real-world scenarios. In addition, TEDA is a recursive algorithm with low computational complexity, which makes the method fast and suitable for big data and real-time applications. Typicality and eccentricity are recalculated with every new data sample, so a threshold needs to be defined that separates normal from outlier (faulty) data samples [35].
Bezerra et al. [3] and Costa et al. [43] use TEDA as a fully autonomous algorithm for fault detection in industrial processes. Several signals are used to detect data samples as faulty. With the TEDA method, no prior knowledge of the processes and data samples and no user-defined parameters are needed. The TEDA method was used as an unsupervised learning algorithm by Lou and Li [45], who select features via the Laplacian Score method before training to make a priori knowledge during the pre-processing stage negligible.
Improved Support Vector Machines
An improved Support Vector Machine (SVM) for fault detection, named online sparse least squares SVM (OS-LSSVM), is proposed by Deng et al. [46]. They use the method for the detection and prediction of sensor faults. The sensors produce time-series data, which is analyzed based on sliding windows. The approach is based on the LSSVM method introduced by Suykens et al. [64]. Additionally, a sparsity component is implemented, which states that all input vectors can be linearly represented by the base vector space so that all training samples can be replaced by the base vector set. Furthermore, the training data is acquired with the sliding time window method that only considers the latest data points. This increases the prediction speed so that the method can be used for the real-time prediction of faults. For evaluation, the proposed method is applied to a gyro sensor. The results show that the residual error is lower when using LSSVM without the proposed sparsity component, while the forecasting time decreases with the sparsity component.
Fault Classification
In this paper, the fault classification process includes the detection of various fault types by clustering analysis and the classification of detected faults into predefined fault classes. In both cases, data that indicates faults needs to be analyzed by experts with deep knowledge of the monitored system. In the case of clustering, experts need to define which identified cluster represents which fault type (unsupervised learning). In the case of classification, experts need to define which fault types can occur in the monitored system and which historical data sets represent those fault types (supervised learning).
Fault classification, and therefore the generalization of faults, is a prerequisite for fault prioritization (Section 3.3), so that experts can focus on a limited number of identified fault types. An overview of the presented fault classification methods is given in Table 2.
Fault Clustering Methods
Fault clustering methods include k-means clustering, Gaussian-Mixture-Model clustering, and fuzzy-c-means clustering. Those methods are used by Amruthnath and Gupta [1,2] to identify clusters in vibration data of a rotating fan in different setups. By using unsupervised learning techniques, they address the challenges of supervised learning for early fault detection, such as the necessity of historical, labeled data and the incapability of classifying new faults that are not known at training time, which results in an extended training time and an inflexible model. On the contrary, unsupervised learning methods can be used to discover class structures, for which no knowledge of the original data is required. Both papers use PCA for dimensionality reduction and make the assumption that vibration is the only significant feature.
The Gaussian-Mixture-Model is a probabilistic density estimation method that can identify several Gaussian distributions within a data set. Each of the distributions represents a cluster. Amruthnath and Gupta [1] identified six clusters in total in their data with the Gaussian-Mixture-Model method, of which three represent redundant healthy states of the machine, and one each represents the following faulty states: operating failure, equipment failure, and total shutdown of the machine. Experts defined the representations of the clusters. For the k-means method, the silhouette method identified an optimal number of two clusters. The clusters represent healthy and faulty states in general, but differentiation between operating failure, equipment failure, and total shut-off is impossible. In another research study, Amruthnath and Gupta [2] identified five clusters with the Gaussian-Mixture-Model method. Based on the elbow method, three clusters were identified with k-means, representing the healthy state, warning, and faulty state. Additionally, the fuzzy-c-means method was used. Fuzzy-c-means clustering is an extension of k-means clustering, improved by Bezdek [81]. The parameter c is comparable to the parameter k of k-means. The fuzzy aspect arises because every data set belongs to every cluster with a certain weight depending on its distance to the cluster's centroid. The fuzzy-c-means method achieves the same results as the k-means method if hierarchical clustering has been performed.
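A minimal sketch of such a clustering pipeline follows, with synthetic blobs standing in for the vibration features: PCA reduces the dimensionality, a Gaussian mixture model clusters the reduced data, and the silhouette score compares candidate cluster counts.

```python
# A minimal sketch of the unsupervised clustering pipeline described above
# (all data and parameter choices are illustrative).
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=300, centers=3, n_features=10, random_state=0)
Z = PCA(n_components=2).fit_transform(X)    # dimensionality reduction first

for n in (2, 3, 4):
    labels = GaussianMixture(n_components=n, random_state=0).fit_predict(Z)
    print(n, "clusters, silhouette:", round(silhouette_score(Z, labels), 3))
```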
Neural Networks
Neural networks and their variants can be used for fault classification. Heo et al. [37] propose an artificial neural network (ANN) model for classifying data into 17 fault types for the Tennessee Eastman process. Normal and faulty data are used as training and test data. An ANN with three hidden layers using the ReLU activation function and a softmax output layer is compared to the results of two references [49,82] and achieves the best overall classification rate for the selected fault types and the normal state. The authors explain the better results with their network design, in which the ANN performs fault detection and classification simultaneously.
Another ANN variant is the autoencoder. An autoencoder is an unsupervised learning technique that identifies important features in the input data. The model encodes the input data into a lower-dimensional space and tries to reconstruct it through decoding. The difference between the original input data and the reconstructed output data gives the reconstruction error, which is minimized by training the autoencoder [83]. Several methods exist to encode the input into a lower-dimensional space.
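A minimal autoencoder sketch in PyTorch is given below; the layer sizes, data, and training schedule are illustrative assumptions, not the architecture of any cited paper.

```python
# A minimal autoencoder: encode into a lower-dimensional space, decode back,
# and minimize the reconstruction error (MSE) during training.
import torch
from torch import nn

class Autoencoder(nn.Module):
    def __init__(self, n_in=48, n_hidden=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(256, 48)                 # stand-in for normalized sensor data

for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), x)          # reconstruction error
    loss.backward()
    opt.step()
print("final reconstruction error:", loss.item())
```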
Lv et al. [49] apply a stacked sparse autoencoder with a softmax classifier to the multi-class classification problem of several fault types and the normal mode. A stacked sparse autoencoder is a neural network consisting of several sparse autoencoders. The sparsity penalty causes most of the hidden layer units to remain inactive and thus to focus on unique features that identify the classes of the training data [83]. The model is evaluated on data of the Tennessee Eastman process experiment that was detected as faulty beforehand and achieves the best average fault classification rate compared to other state-of-the-art approaches, including sparse representation, random forest, SVM, and structural SVM.
Fang et al. [66] propose an autoencoder to detect 10 different fault types from data generated in a satellite power system. The neural network includes two hidden layers that use the de-noising autoencoder method. The input layer consists of 48 nodes, one for each input parameter. The output layer consists of 10 nodes, one for each fault type to be identified. The proposed method is compared to the deep belief network and deep Boltzmann machine methods based on the same data set. The results indicate that the proposed deep neural network method performs best.
Another neural network variant is the convolutional neural network (CNN). Goodfellow, Bengio, and Courville [83] describe CNNs as neural networks used for processing data with a grid-like topology, like time-series data. A CNN includes at least one convolutional layer as a hidden layer. A convolutional layer transforms input data into sparse representations by sliding a minimum of one kernel matrix over the input data matrix and calculating the dot product in each step. This sparse representation is used as input to the next network layer. The last layer of the CNN is a fully connected classification layer that uses the softmax function. CNNs merge the feature extraction step and the model training step, which saves computational time so that CNNs can be used in real-time fault classification applications. This has been demonstrated by Ince et al. [71], who use a one-dimensional CNN to detect and classify faults for condition monitoring of a motor setup. Their method achieves high accuracy rates and low computational complexity due to the structure of the CNN model. Janssens et al. [69] propose a feature learning approach based on CNNs for the detection of several faults in rotating machinery. Feature learning refers to the process of transforming raw data into a data format appropriate for the intended task. This transformation is done automatically by the neural network and not by experts, as in feature engineering. The proposed approach consists of two pipelines. The first pipeline determines if the data shows rotor imbalance, which is achieved by feature extraction and logistic regression. The second pipeline detects four different fault types with a CNN. For comparison, the second pipeline is also implemented with feature extraction and the random forest method, as well as the SVM method with different kernels. The evaluation results show that random forest performs best among the approaches without feature learning, but the approach with feature learning based on CNN performs better than the random forest approach.
Sparse Representation Classification
Sparse representation classification (SRC) is proposed by Wu et al. [72] for fault classification. As the transfer of a multi-class classification problem into several binary classification problems is a time-consuming process, the SRC method uses training data sets with class labels to build a dictionary. To classify a new data sample, sparse representation is performed: the model finds those entries of the dictionary which represent the new data sample best, and the class whose dictionary entries represent the new data sample with the smallest error defines the new data sample's class [84]. The approach is validated by an experiment on the Tennessee Eastman process.
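The following sketch illustrates only the residual-based classification rule; proper SRC solves an l1-regularized sparse coding problem over the full dictionary, whereas this simplified stand-in uses a least-squares fit per class sub-dictionary.

```python
# A simplified sketch of the SRC classification rule: the class whose
# sub-dictionary reconstructs the new sample with the smallest residual wins.
import numpy as np

def src_like_classify(x, dictionaries):
    residuals = {}
    for label, D in dictionaries.items():        # D: columns are training samples
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        residuals[label] = np.linalg.norm(x - D @ coef)
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(0)
dicts = {0: rng.normal(0, 1, (30, 8)), 1: rng.normal(3, 1, (30, 8))}
x_new = rng.normal(3, 1, 30)                     # drawn from class 1's distribution
print("predicted class:", src_like_classify(x_new, dicts))
```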
Support Vector Machines
Support vector machines (SVMs) are used by Laouti et al. [73] to detect 10 different fault types in wind turbines, including sensor faults, actuator faults, and system faults. For each fault type, one model is trained. All models use the Gaussian radial basis function as the kernel function, but each fault obtains different vectors used for classification. The method is validated by application to the data of a real wind sequence, and the results show that 6 out of 10 faults could be detected with acceptable accuracy. Imbalanced data influences the performance of the SVM method significantly, but this can be handled by assigning weights to training samples or by oversampling training samples of minority classes [47].
Yan et al. [74] propose a hybrid approach for fault classification using an autoregressive model with exogenous variables (ARX) for data pre-processing and an SVM with the Gaussian radial basis function as kernel for the classification of five different fault types and the normal state. The ARX model is used to remove variable correlations, hence reducing the number of used variables. The model is suitable for online applications as the parameters are estimated recursively. However, the method is not suitable for applications where faults need to be characterized in a short amount of time, because a time interval is required to detect the impact of a fault on the parameters; the study used a time interval of two minutes. The SVM model adopts the one-against-all algorithm, in which a binary SVM classifier is constructed for each fault class against all remaining classes. The results are validated by comparison with several other approaches using variations of data pre-processing and SVM.
Decision Trees
Decision trees are used additionally to SVMs by Demetgul [75] to identify 12 fault types occurring in a didactic modular production system. Several kernels and decision tree methods are used. The test setup provides signals of 8 sensors during normal and fault operation. The results show that SVM achieves test accuracy of 100% for all used kernels except for the sigmoid kernel (52.08%). The decision tree models achieve test accuracy of 100% as well, except for the decision tree trained by Chi-square automatic interaction detection (CHAID) method (95.83%).
Tree-Structured Fault Dependence Kernel
Tree-structured fault dependence kernel (TFDK) is an approach that can be described as a hierarchical version of a large-margin SVM. This method includes fault dependence information in the learning algorithm by assigning tree-structured labels to training data, representing their fault type and severity level. Li et al. [80] use a TFDK as a learning method for the classification of real-time sensor measurements into fault types and severity levels of building cooling systems. Training and test data are sampled from several fault data sets to reduce data imbalance. The approach is evaluated in a cyber-physical test environment equal to the one used in Li, Hu, and Spanos [34]. The results are compared to other methods like multi-class SVM, decision tree, and neural network, which are all outperformed by the proposed tree-structured method regarding classification accuracy.
Linear Discriminant Analysis
Linear Discriminant Analysis (LDA) is a supervised learning technique that reduces the dimensionality of data while retaining a maximum amount of information by combining state indicators into so-called discrimination functions [85]. Li, Hu, and Spanos [34] propose a two-stage method for fault classification and diagnosis of building chillers based on LDA. Their method formulates a multi-class classification problem, including seven faulty states and the normal condition. Eight data sets for each fault type and the normal condition are well separated in a lower-dimensional space produced by LDA, and each of the data sets forms a cluster. Newly monitored data is put into the cluster with the lowest Manhattan distance between the data point and the cluster center. An unknown fault is identified if the distance is higher than a threshold. As most sensors produce data continuously, the training data set can be updated in case an unknown fault is detected. The same algorithm is used to classify monitored data into fault severity clusters, where each cluster represents a defined severity level. The approach is evaluated with an experiment in an integrated cyber-physical test environment.
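A minimal sketch of the classification stage follows, assuming scikit-learn's LDA; the synthetic data, the two-component projection, and the unknown-fault threshold value are illustrative.

```python
# A minimal sketch of LDA-based fault classification: project labeled fault
# data into a lower-dimensional space, compute per-class cluster centers, and
# assign a new sample to the center with the smallest Manhattan distance,
# or declare an unknown fault beyond a threshold.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(i * 4, 1, (40, 6)) for i in range(3)])
y = np.repeat([0, 1, 2], 40)                     # 0 = normal, 1-2 = fault types

lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
Z = lda.transform(X)
centers = {c: Z[y == c].mean(axis=0) for c in np.unique(y)}

def classify(x, threshold=6.0):
    z = lda.transform(x.reshape(1, -1))[0]
    dists = {c: np.abs(z - mu).sum() for c, mu in centers.items()}  # Manhattan
    c_best = min(dists, key=dists.get)
    return "unknown fault" if dists[c_best] > threshold else c_best

print(classify(rng.normal(4, 1, 6)))    # near class 1
print(classify(rng.normal(40, 1, 6)))   # far from all centers -> unknown fault
```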
Fault Prioritization
Fault prioritization is the process of deciding which fault must be eliminated first to reduce the overall fault impact on the production output, especially loss of production. To the best of our knowledge, no automated methods have been proposed in the literature to prioritize faults in industrial production processes. Papers that handle fault or maintenance prioritization require that fault types be prioritized by risk or severity levels beforehand by experts [34,80,86]. The assignment of newly discovered faults to these prioritized fault types results in an indirect prioritization. The Failure Mode and Effects Analysis (FMEA) method, presented in Section 3.3.1, shows how experts can be supported in their decision process of defining the risk and severity levels for the different identified fault types. In the end, the defined priority levels can be assigned to the fault types, and fault classification as shown in Section 3.2 can be used to prioritize faults indirectly.
As an outlook, fault prioritization is also part of other research fields, e.g., software development, where bugs or faults are categorized and prioritized during development. With this aggregated knowledge, classifiers can be trained to rank novel faults, e.g., with natural language processing [87], SVM [87-89], Naïve Bayes [88], k-Nearest Neighbors [88,89], and neural networks [88-90]. A similar approach is conceivable if the huge amount of data in a production site is partially prepared and annotated by domain experts. As a result, the ideas of fault prioritization in software development can also be used to prioritize faults in the Industry 4.0 domain in the future.
In contrast to these software development ideas, FMEA is already used for fault prioritization in the production domain. For this reason, we focus on FMEA methods subsequently.
Failure Mode and Effects Analysis
FMEA is an effective method for failure analysis, identification, and classification, as well as risk assessment of these faults [91,92]. This method originated in the 1960s and was used in the aerospace industry for solving problems of quality and reliability of products [93]. Subsequently, the method was also used in the production industry as a risk assessment tool to increase the quality and stability of systems [93-96]. FMEA has also been used for CPSs [92,97,98]. The representation of FMEA in literature has risen, especially since 2013 [94-96]. Some papers use Failure Mode, Effect and Criticality Analysis (FMECA) if a criticality analysis is included [99,100].
With the FMEA method, several product development steps, including product manufacturing, can be analyzed separately, and potential fault types can be identified and assessed regarding their risk and impact on further manufacturing steps. In product manufacturing, fault types depend on the architecture, the characteristics, and functionalities of the production line so that experts are needed to identify dependencies of the used machines and their potential fault types [94]. After the definition of Risk Priority Numbers (RPNs) for all fault types, an assignment of RPNs to the faults is performed. The RPN enables the comparison of the risks of various machine faults. In the original FMEA, the RPN depends on the numerical assessment, with values 1 to 10, of the fault's severity, occurrence, and detection, where detection gives the probability that a fault is not detected until a failure occurs that impacts the customer [95,101].
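A minimal example of the original RPN computation is given below; the fault names and ratings are invented for illustration.

```python
# A minimal example of the original RPN: severity, occurrence, and detection
# are each rated on a 1-10 scale by experts, and their product ranks the faults.
faults = {
    "bearing wear":   {"severity": 7, "occurrence": 5, "detection": 4},
    "sensor drift":   {"severity": 4, "occurrence": 6, "detection": 8},
    "valve blockage": {"severity": 9, "occurrence": 2, "detection": 3},
}
rpn = {name: f["severity"] * f["occurrence"] * f["detection"]
       for name, f in faults.items()}
for name in sorted(rpn, key=rpn.get, reverse=True):   # highest risk first
    print(name, rpn[name])
```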
However, the RPN experienced some criticism in literature due to some disadvantages that cause the resulting prioritization to be inaccurate [95,102]. The critical shortcomings can be briefly summarized as: (1) the usage of three simple factors (severity, occurrence, detection) does not guarantee that faults with equal risks get an equal RPN and faults with different risks get different RPNs [100,103,104], (2) the difference between RPNs of two faults does not represent their actual risk difference [103], (3) the three risk factors are considered to be equally important, so factor weights are not considered [100,103], and (4) the subjectivity of the RPN factors, caused by the subjectivity of the experts who define the factors for all fault types, is not represented in the resulting RPN and leads to uncertainty [92,100,103].
This criticism caused researchers to develop extensions for the calculation of RPNs, so that there are now various algorithms used to define the RPN of faults. An overview of these extensions and algorithms is displayed in Table 3. One simple adaptation of the RPN is the consideration of additional risk factors, which leads to a diversification of the resulting risk numbers. Examples of additional risk factors are "expected cost of failure" [91], "environmental factors" [92], and further "economic impact" [92,105]. Additional risk factors counter points of criticism (1) and (2). Another extension comparable to additional risk factors is the splitting of one risk factor into several sub-risk factors. This leads to a more precise definition of the original risk factors because more diverse aspects are considered. An example is the partition of the severity risk factor: to define the severity of a fault, both technical and economic aspects should be considered [105,106]. Table 3. Overview of extensions for RPN calculation.
Extension | Details | References
Additional risk factors | e.g., expected cost, cost of failures, weight of corrective actions, uncertain risk factors, environmental factors, economic safety | [91,92,105,107-110]
Usage of sub-risk factors | e.g., severity levels from various perspectives like technical or economical | [104-106]
Fuzzy variables | Fuzziness used in variables to represent uncertainty and imprecise risk factors | [111-113]
Multi-criteria decision methods | Defining risk based on multiple conflicting criteria | [108,113-119]

To handle the subjectivity of defining the risk factors, some authors suggest using fuzzy logic as an extension of FMEA [112,117,119]. In these approaches, the risk factors are defined via linguistic variables like low, medium, and high instead of numerical variables [111-113]. Those variables are then ranked with fuzzy numbers, which means that a rating is given not as one assigned value but as multiple values with probabilities, so that the severity can be ranked as, e.g., {30%: 3, 50%: 4, 20%: 5} instead of using one numerical value on the scale of 1 to 10 [111,112].
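A minimal sketch of the fuzzy-variable idea follows, reusing the severity example from the text; treating the fuzzy rating's expected value as the crisp substitute is one simple reading, and the occurrence and detection values are invented.

```python
# A minimal sketch of a fuzzy-rated RPN: each risk factor is a distribution
# over ratings, and its expected value replaces the single crisp rating.
def expected(fuzzy):                        # fuzzy: {rating: probability}
    return sum(r * p for r, p in fuzzy.items())

severity   = {3: 0.3, 4: 0.5, 5: 0.2}       # the {30%: 3, 50%: 4, 20%: 5} example
occurrence = {5: 0.6, 6: 0.4}               # illustrative values
detection  = {7: 1.0}                       # a crisp rating as a special case

rpn = expected(severity) * expected(occurrence) * expected(detection)
print("fuzzy RPN:", round(rpn, 2))          # 3.9 * 5.4 * 7.0
```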
Multi-criteria decision-making (MCDM) methods are also widely used to overcome the shortcomings of the original RPN. MCDM is a branch of operations research that supports experts in their decision-making process [120]. Examples of MCDM methods are the analytic hierarchy process (AHP) [108,117,118] and the technique for order of preference by similarity to ideal solution (TOPSIS) [113,117,119].
Discussion
The developed process of fault handling, as illustrated in Figure 1, begins with data collection, pre-processing, and feature handling. Research provides numerous attempts to handle these steps. Each of the described fault handling methods requires a different kind of preparation, which is why these steps have not been described in this review in the first instance. However, fault handling requires representative data to operate effectively and generate sensible output, so while following the described fault handling process, sufficient work should be invested in accurate data. Only very few data sets in the context of industrial applications and smart manufacturing are openly accessible [17], which complicates the validation of the defined fault handling process significantly. Therefore, validation must be deferred to a later stage.
With the increase of computational resources at the beginning of the 21st century, the focus seems to shift from model-based to data-driven fault detection methods. Due to Industry 4.0 and the further digitalization of processes, the complexity of manufacturing systems also increases, making model-based and knowledge-based approaches challenging to handle. Among the data-driven methods, a significant part of fault detection is carried out by variants of neural networks and deep learning. We see a trend that neural networks and deep learning will be used primarily in the future, as computational resources are no longer that critical. However, we showed that other promising methods exist, which can be used for some use cases in industry. Ensemble methods tend to improve the results in various ways compared to single methods; we presented a combination of kNN and Naïve Bayes, which reduces the sensitivity to non-informative features. Especially TEDA and OS-LSSVM can detect faults very fast, which is crucial for real-time applications. We assume that combining deep learning and other methods into an ensemble would speed up the fault detection process while providing the benefits of deep learning.
The situation for fault classification is very similar to that for fault detection. Increased computational resources facilitate and promote neural networks and deep learning. Autoencoders became a standard unsupervised learning method for fault classification. Additionally, fault classification requires fault detection beforehand, in which variants of neural networks will be implemented more often. Some papers also showed that one deep learning algorithm can handle a combination of fault detection and classification [121,122], so further applications of fault classification by neural networks and deep learning are likely to be implemented. Besides deep learning, SVMs are still broadly used for fault classification, often, but not exclusively, for supervised learning tasks. Some tree-like algorithms such as decision trees and random forests have been applied to fault classification; these often explain their results more naturally than deep learning. TFDK provides an interesting approach to classify faults by using severity levels. This algorithm may also be used in fault prioritization, which would significantly speed up the fault handling process. To the best of our knowledge, this method has not been implemented in another context, so it may still be an open issue for future research on fault handling.
Fault prioritization in an industrial context is difficult to achieve, as non-statistical algorithms are rare. FMEA is an effective method that is extended frequently to reduce the impact of its criticized disadvantages, and it has proven its operational capability in various industrial environments. However, this method is statistical and expensive regarding both time and work, and it cannot be automated, as it requires a high degree of knowledge within the domain. The first attempts to design automated fault prioritization have been made, but not in Industry 4.0. We see vast potential to transfer knowledge from other disciplines into the manufacturing domain. This is crucial as factories tend to become more complex in the process of digitalization, and only efficient prioritization prevents production outages.
As discussed earlier, some methods cover more than one phase of the mid-tier process. Nevertheless, no known method covers all three steps of fault detection, fault classification, and fault prioritization. This is mainly due to missing algorithms for fault prioritization. However, chances are that a methodology with automated prioritizing methods can be developed in the future, covering the whole mid-tier fault handling process. Such a methodology would be a step forward, especially for fully automated smart manufacturing systems.
For fault amendment, innumerable methods exist, as these methods strongly depend on the results of the preceding mid-tier process and the experience of the operating personnel. A generalization of this process step is therefore not easy to achieve and will not be covered in this paper. An automated approach to fault amendment is also challenging to realize. With new knowledge regarding additional non-statistical fault prioritization methods with certain automated aspects, fault amendment needs to be re-evaluated and possibly leaves additional prospects for future research.
Conclusions
We structured fault handling in the context of Industry 4.0 by defining the process to scope future discussions in the research field. In addition, the requirements for fault handling were defined. Moreover, a survey structured by the presented process is provided, focused on the mid-tier fault handling processes.
Only selected examples are provided for each category of each sub-process, because various types and architectures exist for each method. Therefore, the scope of this work is to provide guidance and an overview of current state-of-the-art fault handling techniques rather than to claim completeness. The categories of each sub-process are designed to help identify research for further reading. The evaluated research was not always in the context of CPSs and may miss certain information about the used evaluation environment, which made it impossible to choose publications strictly according to the presented requirements. Future attempts should point out how methods can be assessed to meet the requirements of industrial processes more precisely. The research also showed that requirements differ for each industrial plant due to high specialization and configuration possibilities. Therefore, methods should be validated in the area of the industrial process before implementation.
We have also identified techniques outside of Industry 4.0 that can give impetus to the fault handling process, especially fault prioritization, in the future. Methods from the field of software development may be useful to classify and prioritize novel faults. In contrast to fault detection and classification methods, automated methods of fault prioritization could not be identified in the literature. Here, many authors state that experts are needed to prioritize faults due to the high individuality of the faults. Therefore, the prioritization of faults solely accomplished by statistical learning methods is unlikely. FMEA was presented in this paper as a method to support experts in their decision-making regarding machine risks. Support methods for experts should be further investigated and optimized, for example, by more thorough expert interviews. Another unexplored path is fault amendment; first promising work has been done with the automated reconfiguration of plants in a failure state, but the field still holds much potential. Future work on our defined process encompasses the definition and validation of non-statistical fault prioritization methods in Industry 4.0. With an accurate data set of industrial origin, the described process can be validated and adapted if necessary. Finally, a methodology that covers the whole mid-tier fault handling process for an industrial plant can be developed. The process and the overview given in this work should be seen as a starting point for the definition of fault handling in the Industry 4.0 domain. Each part is open to future research and may provide guidance.
Funding: We acknowledge support by the Open Access Publishing Fund of Hochschule Darmstadt - University of Applied Sciences.
Conflicts of Interest:
The authors declare no conflict of interest.
Multiple Swarm Fruit Fly Optimization Algorithm Based Path Planning Method for Multi-UAVs
The path planning of unmanned aerial vehicles (UAVs) in the threat and countermeasure region is a constrained nonlinear optimization problem with many static and dynamic constraints. The fruit fly optimization algorithm (FOA) is widely used to handle this kind of nonlinear optimization problem. In this paper, the multiple swarm fruit fly optimization algorithm (MSFOA) is proposed to overcome the drawbacks of the original FOA in terms of slow global convergence speed and local optima, and is then applied to solve the coordinated path planning problem for multi-UAVs. In the proposed MSFOA, the whole fruit fly swarm is divided into several sub-swarms with multiple tasks in order to expand the searching space and improve the searching ability, while the offspring competition strategy is introduced to improve the utilization degree of each calculation result and realize the exchange of information among the fruit fly sub-swarms. To avoid collisions among multi-UAVs, a collision detection method is also proposed. Simulation results show that the proposed MSFOA is superior to the original FOA in terms of convergence and accuracy.
Introduction
Unmanned aerial vehicles (UAVs) have become an area of great concern to many governmental and military organizations around the world. The autonomy level of UAVs depends on the methodology used to control the vehicle and plan its flight path [1]. Therefore, path planning, which is to generate a feasible path between two points that has an optimal or near-optimal performance satisfying constraint conditions, is one of the most important challenges in the autonomous navigation process of UAVs [2]. Typically, in increasingly complex mission environments, one UAV alone usually cannot meet the mission requirements, so there are often multiple UAVs (multi-UAVs) on the same mission at the same time [3]. This paper mainly discusses the path planning of multi-UAV cooperation. To solve the cooperative path planning problem of multi-UAVs, one approach assigns a priority to each UAV: the first-priority UAV does not need a plan, and its sub-goal is the final goal; the UAV with the second priority plans its path to avoid collision with the first-priority UAV; the UAV with the third priority needs to plan its path to avoid collisions with the former UAVs [4]. It has been proven that finding the optimal path is a non-deterministic polynomial complete (NP-complete) problem [2]. As the scale of the problem increases, the complexity of the problem increases rapidly. In many existing works, the optimal path is always considered to be the shortest path, and a deterministic search algorithm is used to find it. However, the optimal path of the UAV should be considered to be related to multiple factors rather than the path length alone.
Fruit flies have a superior sense of smell and vision [18]. In their search for food, fruit flies can not only use their sense of smell to detect smells in the air, but can also use their sense of sight to locate food and other flies. In the FOA, the flies first use their sense of smell to find the direction of the food source and then fly toward the direction with the greatest smell concentration. After that, the fruit flies use their vision to find the location of the food and other flies and fly in the direction obtained by their vision.
Without loss of generality, the global optimization problem in the continuous domain can be summarized as

min F(x)  s.t.  x_j ∈ [X_min, X_max],  j = 1, 2, ..., D   (1)

where F(x) is the objective function with the D-dimensional decision variable x = (x_1, x_2, ..., x_D), and X_min and X_max are the lower and upper bounds of the decision space, respectively.
The parameters of the FOA are the population size M_pop and the maximum number of iterations NC. First, the fruit fly swarm location (x_best, y_best) is randomly initialized in the decision space. The main process of the FOA contains two parts, namely olfactory search and visual search [1,18].
(1) Olfactory search

In the olfactory search process, the fruit fly swarm generates in total M_pop new locations of the food source randomly around its current location:

x_i = x_best + random(−1, 1),  y_i = y_best + random(−1, 1)   (2)

where (x_i, y_i), i = 1, 2, ..., M_pop, denotes the location coordinate of the i-th fruit fly and random(−1, 1) is a random value in the range (−1, 1). For each location produced in the olfactory search process, its smell concentration judgement S_i is the reciprocal of the distance between the location and the origin:

S_i = 1 / sqrt(x_i^2 + y_i^2)   (3)

In the FOA framework, the smell concentration judgement S_i corresponds to the candidate solution of the objective function F(·) in the decision space. The smell concentration of the food source is calculated using the smell concentration judgement as

smell_i = F(S_i)   (4)

(2) Visual search

The visual search process of the fruit fly swarm can be regarded as a greedy selection procedure, where the fruit fly swarm observes all the locations generated via the above olfactory search and finds the best location that has the minimum smell concentration:

[smell_Index, Index] = min(smell_i)   (5)

Then the best smell concentration smell_Index generated via the olfactory search is compared with the smell concentration smell_best of the original fruit fly swarm location (x_best, y_best). If smell_Index < smell_best, the original location (x_best, y_best) is replaced by the newly generated location (x_Index, y_Index), and the fly swarm flies toward the new location by using the visual search:

x_best = x_Index,  y_best = y_Index,  smell_best = smell_Index   (6)

The olfactory and visual search processes are repeated until a termination condition is reached. The general process of the FOA in solving the minimum problem min F(x) is described in Algorithm 1.

Algorithm 1: FOA
1  Randomly initialize the fruit fly swarm's location (x_best, y_best) in the search space.
2  for n = 1 : NC                        /* iterative search */
3    for i = 1 : M_pop                   /* olfactory search */
4      x_i = x_best + random(−1, 1)
5      y_i = y_best + random(−1, 1)
6      S_i = 1 / sqrt(x_i^2 + y_i^2);  smell_i = F(S_i)   /* smell concentration */
7    end for
8    [smell_Index, Index] = min(smell)   /* visual search */
9    if smell_Index < smell_best then
10     x_best = x_Index;  y_best = y_Index;  smell_best = smell_Index
11   end if
12 end for
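A minimal Python sketch of this basic FOA loop on a simple one-dimensional objective is given below; all parameter values are illustrative.

```python
# A minimal sketch of the basic FOA loop described above.
import math
import random

def foa(F, M_pop=30, NC=200):
    x_best, y_best = random.uniform(-5, 5), random.uniform(-5, 5)
    smell_best = float("inf")
    for _ in range(NC):
        # Olfactory search: random locations around the swarm location.
        flies = [(x_best + random.uniform(-1, 1), y_best + random.uniform(-1, 1))
                 for _ in range(M_pop)]
        # Smell concentration judgement: reciprocal distance to the origin.
        S = [1.0 / math.hypot(x, y) for x, y in flies]
        smells = [F(s) for s in S]
        # Visual search: greedy move to the best location found.
        index = min(range(M_pop), key=smells.__getitem__)
        if smells[index] < smell_best:
            x_best, y_best = flies[index]
            smell_best = smells[index]
    return smell_best

print(foa(lambda s: (s - 0.5) ** 2))   # minimum near s = 0.5
```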
UAV Path Planning Method Based on a Modified Fruit Fly Optimization Algorithm
In this section, the MSFOA is proposed to enhance the searching ability of the original FOA and is applied to the path planning for multi-UAVs. The symbol description of the main variables appearing in the proposed MSFOA is listed in Abbreviations.
Motivation
The FOA converges quickly when applied to single-UAV path planning. Yet when it is applied to planning the paths of multi-UAVs, it easily gets trapped in a local optimum. The original FOA searches randomly, which means the convergence speed is fast in the early stage, but it is difficult to converge toward the optimal solution in the later stage. When a fruit fly's path is near the optimal solution, the algorithm may ignore that path, because the path length, the threat level, and other constraints are considered. The optimal path often runs along the edge of a threat cylinder or through a valley; when a fly finds a path near the optimal solution, the path usually falls into the threat cylinder or passes through the interior of a mountain. Such a path has a high smell concentration and is therefore easily eliminated, even though it is near the optimal solution. When the algorithm is used to plan the path for a simple scenario, it can be repeated many times to get the optimal solution, but when it is applied to complex scenarios, such as multi-UAV path planning or path planning in complex terrain, the convergence ability of the FOA is obviously insufficient.
In the FOA, there is no communication between fruit flies, and the direction of the path depends only on the given random direction and random distance, so the ability to converge when solving complex problems is insufficient. In order to overcome these shortfalls of the FOA, increasing the searching ability and the communication between fruit flies can improve the convergence ability of the algorithm. At the same time, in order to avoid possible collisions when planning multiple UAV paths, the algorithm also adds a collision detection method.
Multi-Swarm with Multi-Tasks Strategy
The searching process can be separated into global searching and local searching. Global searching occurs in the early stage of the search, while local searching occurs in the final stage. In this paper, global searching and local searching are considered different tasks, which should employ different searching strategies. A threshold is used to distinguish the two stages: when the cost is larger than the threshold value, the algorithm is in the global searching period; when the cost is smaller than the threshold value, the algorithm is in the local searching period.
In the global searching period, it is important to enlarge the searching space to improve the global searching ability, since the size of the searching space has a significant impact on the results. The FOA only uses a single-swarm approach in which all fruit flies search from a single location, so when the searching space is small, the searching capacity is insufficient. In the MSFOA, the searching space can be expanded linearly by using multiple fruit fly swarms at the same time, starting from many different positions. The fruit fly population M_pop is divided equally into G swarms, where G is the total number of swarms. Every fruit fly takes a big step in searching, as shown in Equations (10) and (11), where sin((π/2) · random(−1, 1)) is introduced to map random(−1, 1) toward −1 and 1.
Here X_best^(g) is the concentration position of the g-th swarm. In the local searching period, when the fruit fly swarms are near the global optimum, it is important to reduce the searching space for further convergence; fruit flies should pay attention to the points near themselves to obtain a better result, so the searching step is reduced.
Competitive Strategies of Offspring
In order to avoid the multi-swarms falling into a local optimum, each swarm is independent in visual search and olfactory search, while the information exchange occurs in the competition of offspring, which is explained in detail below. In this way, the multi-swarms do not fall into the same local optimum at the same time, while the information exchange between the swarms is still guaranteed.
The competition strategy of the offspring uses the optimal individual to cross with other flies at random to generate a new swarm of the same scale and select a new optimal individual. That is, each swarm has two optimal individuals: one comes from the old swarm, the other from the new swarm. The optimal solution of the swarm is obtained after competition between the two optimal individuals.
The effect of this method: when fruit fly A is near the optimal solution but has a high smell concentration, an individual obtained by hybridizing A with a random fruit fly B carries the path information of both A and B; the offspring's path information is obtained by distributing the weights over the path data of the two parent flies. This strategy increases the probability that a fruit fly near the optimal solution will be selected by the algorithm. The i-th new individual of the g-th swarm is generated by crossing the swarm's best individual with a randomly chosen fly, where random(1, G) produces random integers between 1 and G, X_new,i^(g) denotes the position of the i-th fly in the g-th new swarm, and coe_1 and coe_2 are the weights of the parental path data. After the best smell concentration of each new swarm is calculated, all the fruit flies in every swarm fly to the point with the minimum smell concentration of their swarm; if the smell concentration of the offspring is less than that of the old best individual, the swarm location (x_best, y_best) and its smell concentration are updated accordingly. The overall MSFOA procedure (Algorithm 2) can be summarized as follows: set the fruit fly population size M_pop, the maximum number of iterations NC, the swarms' initial positions [x_best, y_best], the swarm number G, the threshold, the hybridization coefficients coe_1 and coe_2, and the parameter R; then, in each iteration, perform the multi-swarm multi-task searching strategy in every swarm, apply the competition of offspring and the visual search in each swarm by Equations (18) and (19), and finally find the best individual from the G swarms and export it.
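A minimal sketch of the offspring competition step follows, under one plausible reading of the description: the swarm's best individual is crossed with the flies of a randomly chosen swarm using weights coe1 and coe2 (assumed here to sum to one), and the better of the old and new best individuals survives.

```python
# A minimal sketch of the offspring competition strategy.
import numpy as np

def offspring_competition(swarms, bests, cost, coe1=0.7, coe2=0.3):
    G = len(swarms)
    for g in range(G):
        partner = np.random.randint(G)                         # random(1, G)
        # Crossover: weighted combination of the g-th swarm's best individual
        # with every fly of a randomly chosen swarm (assumed coe1 + coe2 = 1).
        new_swarm = coe1 * bests[g] + coe2 * swarms[partner]
        new_best = min(new_swarm, key=cost)
        # Competition: the better of the old and new best individuals survives.
        bests[g] = new_best if cost(new_best) < cost(bests[g]) else bests[g]
    return bests

cost = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(0)
swarms = [rng.normal(0, 1, (10, 3)) for _ in range(4)]   # 4 swarms, 10 flies, 3-D
bests = [min(s, key=cost) for s in swarms]
print(cost(min(offspring_competition(swarms, bests, cost), key=cost)))
```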
Problem Modeling
UAV path planning needs to adapt to current terrain and physical world constraints, such as terrain undulation, gravitational acceleration, etc. In addition, the path of the UAV cannot exceed the limits of the UAV itself, such as turning radius, glide angle, flying speed, minimum flying height, and so on. Within these limits, an optimal flight path is planned for the UAV, which can avoid a specific area and reach the target point with less energy consumption in a shorter time. The collision risk between UAVs needs to be considered in the three-dimensional (3D) cooperative path planning of multi-UAVs.
Path representation is a key problem in path planning, and the grid-based map [15] and the spline curve [1,5] are two common methods. Zhang proves that by using the B-spline-based path representation strategy, the path can be constructed using a relatively smaller number of parameters than with a complete geometric description of the path [1]. Thus, the algorithm uses very few control points to record the path; when the full path is needed, it is generated from the control points.
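A minimal sketch of generating a dense path from a few control points with SciPy's B-spline routines is given below; the exact spline construction in [1] may differ, and the control points are invented.

```python
# A minimal sketch: a few 3D control points define a cubic B-spline, from
# which dense path points are generated for cost evaluation.
import numpy as np
from scipy.interpolate import splev, splprep

ctrl = np.array([[0, 0, 100], [40, 30, 130], [80, 50, 120], [150, 80, 105]]).T
tck, _ = splprep(ctrl, k=3, s=0)           # cubic B-spline through the control points
u = np.linspace(0, 1, 100)
x, y, z = splev(u, tck)                    # dense path points
print(len(x), "path points generated from", ctrl.shape[1], "control points")
```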
In this paper, the objective function divides the path constraints into the following categories: path length, threat cost, altitude cost, terrain constraint, corner constraint, and descent angle constraint. All the constraints together constitute the objective function, which the algorithm uses to express the quality of a path. Before calculating the objective function, it is necessary to generate the complete path points from the control points. The objective function J (Equation (20)) combines the altitude cost JH, the length cost JL, the threat cost JT, the terrain constraint CH, the corner constraint CT, the climb down constraint CG, and the collision risk JP, which are described in detail as follows.
The altitude cost is determined by all points along the path and reflects the flying altitude of the UAV above the ground. When flying at a low altitude, the UAV can benefit from terrain masking to avoid unknown radars [1]. Suppose the whole UAV path is represented by k points; if the variable z_n is the height of the n-th point in the path and h_min is the minimum flying height of the UAV, then the altitude cost JH adds up the heights of all the points along the path above the ground. The path length cost JL reflects the flight length of the entire UAV path; it adds up the distances between all pairs of adjacent points to get the total length:

JL = Σ_{n=1}^{k−1} sqrt((x_{n+1} − x_n)^2 + (y_{n+1} − y_n)^2 + (z_{n+1} − z_n)^2)

where x_n, y_n, and z_n are the x, y, and z coordinates of the n-th point in the path.
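A minimal sketch of the path length cost JL as just defined follows; the example path points are invented.

```python
# A minimal sketch of the path length cost JL: the Euclidean distances
# between consecutive path points are summed.
import numpy as np

def path_length(points):
    pts = np.asarray(points, dtype=float)     # shape (k, 3): x, y, z per point
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())

path = [(0, 0, 100), (50, 20, 120), (100, 60, 110), (150, 80, 105)]
print("JL =", path_length(path))
```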
The terrain constraint is determined by the length of the path in the underground section. It is used to punish those paths that do not lie entirely above the ground. The variable p is the map information, and the function H(p) gives the terrain height at the current point of map p.
In this paper, the threat around a threat point is only related to the distance, so the threat area is approximated as a cylinder: within a certain range, the closer to the threat point, the greater the degree of threat. The variable n_r is the number of threat points, and T_de(i) is the coordinate of the i-th threat point.
The minimum turning angle is determined by the maximum speed and gravitational acceleration. The turning angle of each point in the path is calculated; if the current turning angle is greater than the maximum turning angle, the turning angle constraint increases iteratively. This iterative accumulation quickly filters out individuals that exceed the limit. Here ϕ_n denotes the turning angle at the n-th point in the path and ϕ_max denotes the maximum turning angle of the UAV.
CG is used to constrain the slope at the current point to meet the maximum climb slopes α and minimum climb slopes β.
where

α_n = −1.5377 × 10⁻¹⁰ z_n² − 2.6997 × 10⁻⁵ z_n + 0.4211
β_n = 2.5063 × 10⁻⁹ z_n² − 6.3014 × 10⁻⁶ z_n − 0.3257

The cost JP is used to assess the risk of collisions between UAVs. The risk of collision between UAVs is determined by the time it takes for them to reach two points that are very close together. When planning the paths of multi-UAVs, these UAVs are first numbered according to their degree of importance, and the UAVs with lower numbers are given priority of passage on the map, because each UAV has to avoid the UAVs with greater priority.
where PC is the penalty factor, N is the number of the current UAV, and Δtime_k is the time difference between the two UAVs when they arrive at a point where a possible collision could occur; the points of the flight path are generated by the B-spline curve.
Application of MSFOA to Multi-UAVs Path Planning
(1) Collision detection method

Collision avoidance is an important part of multi-UAV path planning. Due to the huge space in the 3D map, this method only detects the possibility of collision after the path is generated. When the n-th UAV's flight path is to be checked for collisions, it is checked in pairs against the previous n − 1 UAVs' flight paths. The n-th path is called the path to be detected, and the previous n − 1 paths are called air_path_{n−1}. For each point on the detection path, the distance to each point of air_path_{n−1} is calculated. If the distance is less than the minimum safe distance, the two points are called a possible collision point, and the path is punished. In the mathematical description, JP_i is the collision risk between the i-th and the m-th UAV, the parameter C is the time penalty factor, and t_crash is the time difference between the two UAVs at the point of possible collision.

(2) MSFOA based multi-UAVs path planning

To further explain the MSFOA based path planning, the specific steps of multi-UAV path planning with MSFOA are elaborated. The algorithm records the location of the d-th control point of the i-th individual fruit fly in swarm g with X_i^(g),j, where S_i^(g),j is the smell concentration judgement. The path is generated from P_i, the coordinates on the Z axis of all the control points along this path; k represents the number of control points, where k ranges from 2 to D − 1. Then the starting and ending points are added, where (X_S,N, Y_S,N, Z_S,N) is the beginning of the path of the N-th UAV and (X_T, Y_T, Z_T) is the end of the path. There is no essential difference between the multi-origin and multi-destination model in this paper. The MSFOA for multi-UAV path planning is implemented as follows:

Step 1: Determine the information of the UAVs' mission, including starting coordinates (X_S,N, Y_S,N, Z_S,N) and ending coordinates (X_T, Y_T, Z_T); the number of UAVs required for the mission Num, maximum velocity v_max, maximum turn overload n_max, and minimum altitude h_min; mission map information p, boundaries P_x_max, P_y_max, P_z_max, and the number D of control points that each UAV needs to plan out.
Step 2: Identify enemy ground weapons, including threat types such as radar, missiles, anti-aircraft guns, weapon positions (X threat, j , Y threat, j ), and their respective threat ranges.
Step 3: Set up the parameters of MSFOA, including maximum iteration number NC, the number of new swarms G, the fruit fly population size M pop , the threshold, the searching step dwindled parameter R, and cross genetic coefficient coe 1 and coe 2 .
Step 7: Generate the random locations for the food source using the olfactory search (Equations (10) and (11)) while the best smell concentration is larger than the threshold; otherwise, generate the new locations with the reduced step according to Equation (35):

X = X_best^(g) + R · random(−1, 1)   (35)

Step 8: Update the smell concentration judgement; if S_old,i^(g),j is bigger than 1, jump to step 7, otherwise jump to step 9.
Step 9: if d < D, let d = d + 1, jump to step 7, otherwise jump to step 10.
Step 10: Generate the path of this fruit fly, path_i, using Equations (31)-(33). Update the smell concentration as smell_i^(g) = J(path_i), where J is described in Equation (20). If i < M_pop/G, then let i = i + 1, d = 1 and jump to step 7, otherwise jump to step 11.
Step 11: Find out the old best in the g-th swarm (x oldbest ) using Equations (12) and (13). If g < G, then let g = g + 1, d = 1, i = 1 and jump to step 7, otherwise jump to step 12.
Step 13: Generate the fruit flies in the new swarms according to the offspring competition strategy. Step 14: Update the smell concentration judgement; if S_new,i^(g),j is bigger than 1, jump to step 13, otherwise jump to step 15.
Step 15: if d < D, let d = d + 1, jump to step 13, otherwise jump to step 16.
Step 16: Generate the path of this fruit fly, path_i, by using Equations (31)-(33), and update the smell concentration in the same way as in step 10. If i < M_pop/G, then let i = i + 1, d = 1 and jump to step 13, otherwise jump to step 17.
Step 17: Find out the index of new best in the g-th new swarm Index new by Equation (17).
Step 18: All the fruit flies in the g-th swarm fly to the location with the minimum smell concentration according to Equations (18) and (19). If g < G, then let g = g + 1, d = 1, i = 1 and jump to step 13, otherwise jump to step 19.
Step 19: If nc < NC, let nc = nc + 1 and jump to step 5, otherwise jump to step 20.
Step 20: If num < Num, generate and output the path of the num-th UAV then let num = num + 1 and jump to step 5; otherwise, end and output the UAV paths.
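A minimal sketch of the collision detection idea described above follows; the safe distance, time window, and penalty factor PC are illustrative values, and the time-difference handling is a simplified reading of Δtime_k.

```python
# A minimal sketch of collision detection: every point of the path to be
# checked is compared with every point of the previously planned paths; a
# pair closer than the safe distance with a small arrival-time difference
# counts as a possible collision and is penalized.
import numpy as np

def collision_penalty(path_n, prior_paths, times_n, prior_times,
                      d_safe=10.0, t_window=5.0, PC=1000.0):
    penalty = 0.0
    for path_m, times_m in zip(prior_paths, prior_times):
        # Pairwise distances between all points of the two paths.
        d = np.linalg.norm(path_n[:, None, :] - path_m[None, :, :], axis=2)
        dt = np.abs(times_n[:, None] - times_m[None, :])
        penalty += PC * np.sum((d < d_safe) & (dt < t_window))
    return penalty

p1 = np.linspace([0, 0, 100], [100, 100, 100], 50)
p2 = np.linspace([0, 100, 100], [100, 0, 100], 50)   # crosses p1 mid-way
t = np.linspace(0, 60, 50)
print(collision_penalty(p2, [p1], t, [t]))           # non-zero penalty
```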
Results and Discussion
This section discusses the simulations used to assess the performance of our proposed algorithm in solving the UAV path planning problem. Seven algorithms are tested under three scenarios: MSFOA, FOA, PSO, DE, ABC, IFFO, and MFOA. The 3D stereo displays of the three scenarios are shown in Figure 1. The algorithms are coded using Matlab 2018b. Owing to their stochastic nature, evolutionary algorithms may arrive at better or worse solutions than they previously reached during their search for new solutions. For this reason, every tested algorithm is run 40 times independently for each scenario, and the statistical results are used for performance evaluation and comparison. The main parameters of the seven tested algorithms are detailed in Table 1; the parameter configurations are all based on the suggestions in the corresponding references. To ensure a fair comparison, the same maximum iteration number is used for all algorithms as the stopping criterion, while the population size is set to 100 for all algorithms. The best values of each scenario are highlighted using boldface.
Table 1 (excerpt). Main parameters of the tested algorithms:
- DE [10]: M_sol = 100; the scaling factor F is set as a random value in [0.2, 0.9]; the crossover factor cr = 0.9.
- ABC [11]: N_e = 50, N_u = 50; the largest local searching times Limit = 20.
For the first scenario, all seven tested algorithms find a safe flight path for the UAVs. It can be seen that the best path generated by MSFOA is the smoothest. Comparing the results in the first scenario column of Table 2, it is observed that MSFOA obtains the minimum best value among the algorithms, which indicates that MSFOA has the most powerful optimization ability in the statistical sense. Figure 3 displays the experimental results of the second test scenario. For this scenario, DE and IFFO enter the threat areas. From Table 2, it can be seen that MSFOA achieves the smallest mean cost value. Furthermore, the result of the third scenario is shown in Figure 4. It is obvious that MSFOA has the best path for the UAVs. In the third scenario column of Table 2, the cost of MSFOA is significantly decreased in comparison to the other algorithms.
Above all, these experimental results demonstrate that the MSFOA still maintains a higher performance than the FOA, PSO, DE, ABC, and two other modified FOA versions.
The convergence curves of the average best cost values are displayed in Figures 5-7. In the first scenario, it can be seen that DE, FOA, and IFFO show inferior convergence. MFOA achieves a faster convergence speed and smaller cost than MSFOA in the early stages. However, in the later iterations when most algorithms come to stagnation, MSFOA still shows the ability to search for better solutions. The searching range decreases as the cost value decreases below the threshold, which can improve the local searching ability near the optimal solution, and thus leads to the best global convergence of the algorithms.
In Table 3, the best, median, and average of the cost values of the sum for three UAVs are recorded. The best values are highlighted using boldface. From this table, it can be seen that MSFOA achieves the smallest cost value in the first and third scenarios and the second smallest in the second scenario. MFOA obtains a smaller average and median in the second scenario, but the distributed interval of solutions obtained by MFOA in Figure 3 is evidently worse than that obtained by MSFOA. Comprehensively comparing the statistical data of all algorithms in the three scenarios, MSFOA still shows its superiority to other algorithms in terms of searching ability and stability.
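The statistics in Tables 2-3 follow from the 40-run protocol described at the start of this section. A minimal Python sketch of that protocol (the `algorithm` callable is a hypothetical interface, not the paper's code):

```python
import numpy as np

def run_statistics(algorithm, scenario, runs=40):
    """Collect best/median/mean statistics over independent runs.
    `algorithm` is assumed to return the best cost of one run."""
    costs = np.array([algorithm(scenario) for _ in range(runs)])
    return {"best": costs.min(),
            "median": float(np.median(costs)),
            "mean": costs.mean(),
            "std": costs.std(ddof=1)}
```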
Conclusions
This paper proposed an MSFOA for cooperative path planning of multiple UAVs over three-dimensional rugged terrain. To ensure convergence speed and avoid local optima, the algorithm was improved in several ways. The multi-swarm strategy is used to increase the searching ability of the algorithm. To make effective use of the searching information, we proposed an offspring competition strategy, which ensures the efficient use of searching information and avoids falling into local optima. To plan flight paths for multiple UAVs and avoid collisions between them, we proposed a method to detect collisions between UAVs. Several simulations have shown the effective performance of the proposed approach. The proposed method can be used for multi-UAV path planning, can solve high-dimensional function optimization problems, and can be widely applied in the field of robotics. In addition, the parameters and the number of sub-swarms usually affect the searching performance, and this may be improved in further work.
Author Contributions: K.S. and X.Z. proposed the algorithm and wrote the paper, as well as analyzed the data; K.S. and S.X. conceived and performed the experiments. All authors have read and agreed to the published version of the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The x coordinate of the location of the g-th old swarm.
The y coordinate of the location of the g-th old swarm.
The x coordinate of the i-th fruit fly in the g-th old swarm.
The y coordinate of the i-th fruit fly in the g-th old swarm.
The smell concentration judgement of the i-th fruit fly in the g-th old swarm.
The smell concentration of the i-th fruit fly in the g-th old swarm.
Index^(g)_old: The index of the fly with the best smell concentration in the g-th old swarm.
x^(g)_oldbest: The x coordinate of the fly with the best smell concentration in the g-th old swarm.
y^(g)_oldbest: The y coordinate of the fly with the best smell concentration in the g-th old swarm.
The x coordinate of the i-th fruit fly in the g-th new swarm.
The y coordinate of the i-th fruit fly in the g-th new swarm.
The smell concentration judgement of the i-th fruit fly in the g-th new swarm.
The smell concentration of the i-th fruit fly in the g-th new swarm.
Index^(g)_new: The index of the fly with the best smell concentration in the g-th new swarm.
G: The number of swarms.
"Engineering",
"Computer Science"
] |
A Comparative Analysis of the Design and Implementation of the Twin Peaks Model of Financial Regulation in South Africa and Australia
In response to the 2007-2009 global financial crisis, South Africa, like most other countries, embarked on a journey of financial regulatory reform. This transformation has seen the country shift away from a silo approach, where various sectoral regulators supervise different financial institutions, towards the Twin Peaks model of financial regulation. The term “Twin Peaks” refers to the separation of regulatory functions between two independent regulators. For instance, one regulatory agency is responsible for the supervision of the safety and soundness of financial institutions, and the other is focussed on the regulation of business conduct. The Twin Peaks model is designed to provide the benefits and efficiencies of an integrated approach while simultaneously addressing inherent conflicts between the objectives of the safety and soundness of the financial system and consumer protection. The Financial Sector Regulation Act provides for the Twin Peaks model in South Africa. This Act positions South Africa as one of the first developing countries to adopt the model. On the other hand, Australia is the pioneer of the model since it implemented it in 1998. Its implementation in Australia, much as in South Africa, involved the separation of regulators for prudential soundness on the one hand and market conduct and consumer protection on the other. In this article, the authors provide a comparative analysis of the design and implementation of the Twin Peaks model in Australia and South Africa. They argue that while there are minor differences between the design and implementation of this model in Australia and South Africa, the model was customised and implemented to align with the specific needs of each country.
ensuring the solvency and soundness of financial institutions, the promotion of fairness, efficiency and transparency of financial markets, and the promotion of a stable financial sector. The model was designed to provide all the benefits and efficiencies of an integrated approach while simultaneously addressing the inherent conflict between the objectives of safety and soundness and consumer protection. Under the Twin Peaks model, it is essential to ensure that the objectives of each regulator and the boundaries, or "regulatory perimeters", between them are clearly defined. This is crucial especially when a market participant is subject to regulation by both regulators. Proponents of the Twin Peaks model such as Mc Vea15 also argue that it helps insulate prudential supervisors from excessive, intrusive consumer-oriented approaches.16 The Twin Peaks model seeks to counter problems associated with innovations in product design and distribution which had "blurred" the boundaries between financial institutions and their financial products.17 For instance, mergers between different financial institutions, such as banks merging with insurers and/or merchant banks merging with securities traders, resulted in some regulatory gaps in the financial sector.18 This integration between different types of financial entities posed challenges to regulatory frameworks, and oversight by regulatory bodies became unclear and insufficient.
The strength of the Twin Peaks model lies in the fact that the two peak regulators are more likely to have dedicated objectives and clear mandates to which they are exclusively committed.19 In other words, the twin peak regulatory agencies have clearly defined objectives enabling them to carry out their work expediently and creating an obligation of accountability on each of them.20 Moreover, the Twin Peaks model combats the duplication of roles between the regulatory agencies. The prudential regulator exists to complement the regulatory efforts of the market conduct regulator.21 The Twin Peaks model tackles the challenges posed by the growing complexity of financial markets and the constant rise of different financial conglomerates. Furthermore, it seeks to curb the inherent conflict of interest that arises under a single or super regulator model. Such a regulator, which combines both prudential and conduct regulation, is normally unable to objectively fulfil all its conflicting priorities timeously. Different financial sectors often have divergent needs and priorities. Balancing these interests without favouring one sector over another could be more challenging for a single or super regulator. Thus, a single or super regulator could struggle to provide unbiassed and equitable oversight in the financial sector. The likelihood of this occurring under Twin Peaks is lower due to the fact that both the prudential and the market conduct regulators are clearly focussed on their mandates and objectives. Nonetheless, the only weakness of the Twin Peaks model is that it could create challenges of regulatory overlap for dual-regulated entities.
15 Mc Vea "The impact of Global Financial Crisis" 44-60.
3 Overview of the Australian Twin Peaks model
The history and structure of the Twin Peaks model
The introduction of the Twin Peaks model in Australia was spearheaded by the Wallis Inquiry, which was set up by the Australian government in 1996 to review its financial system.22 The Wallis Inquiry held that innovation in product design and distribution had blurred the boundaries between financial institutions and their different products. It also noted that there was an increase in competition in the financial market and "conglomeration" in financial services institutions.23 Product innovation had enabled different types of financial institutions to offer essentially identical financial products while being subject to different degrees of financial regulation.24 The Wallis Inquiry recommended that a single agency should be established for the regulation of companies, market conduct and consumer protection.25 The implementation of the ensuing legislation signified a departure from a single regulator model to a Twin Peaks model, which empowers two regulators to oversee their specific mandates in Australia. Accordingly, Australia consolidated the functions for financial regulation into two regulators, namely the Australian Securities and Investments Commission (ASIC), which is responsible for the regulation of companies, market conduct and consumer protection, and the Australian Prudential Regulation Authority (APRA), which is responsible for prudential regulation. The Reserve Bank of Australia (RBA) is responsible for monetary policy and financial stability, including ensuring a safe and reliable payments system.28 To this extent, Schmulow describes the Australian model as a three-peak model, with each peak created as an independent statutory authority.29
The role of the APRA
APRA is an independent statutory authority established for the purpose of prudential supervision and for promoting financial stability in Australia. It is a body corporate with perpetual succession and it functions completely outside the remit of the RBA.30 This means that the APRA can enter contracts, sue or be sued,31 and hold property in its own name, separately from the RBA. The perpetual succession component empowers the APRA to engage in long-term planning and strategy development, knowing that the entity will endure over time. This enables better decision-making and investment in the financial sector. The APRA is able to plan and make long-term regulatory decisions since it has perpetual succession. The APRA took over the responsibilities of eleven separate state and federal financial regulators upon its formation in 1998.32 The APRA Act specifies that APRA was created to oversee the regulation of financial sector institutions in accordance with prudential laws and regulations.33 The APRA may establish prudential standards that should be complied with by regulated institutions.34 These standards provide a number of requirements aimed at promoting financial soundness, risk management and governance in the Australian financial institutions and financial sector.35 The APRA 2019-20 Annual Report provides that its core functions include the identification of, and the timeous response to, issues affecting the operation and performance of the APRA.36 The APRA Act provides that the APRA executive board should have at least three members and up to five members.37 The five members consist of the chairperson, deputy chairperson, and three other members. The APRA members are appointed by the Governor General, on the advice of the Australian Government, for terms of up to five years.38 The APRA has eight divisions and two independent internal teams. These divisions consist of the banking, insurance, superannuation, policy and advice, cross-industry, enterprise services, technology, and data divisions. The two independent internal teams consist of the Chief of Staff Internal Audit Team and the Enterprise Security and Risk Team.39 The Banking division supervises licensed deposit-taking institutions such as banks, credit unions and building societies.40 The Insurance division oversees general insurers, life companies, friendly societies and private health insurance companies.41 The Superannuation division supervises all registered superannuation entity licensees. The Policy and Advice division ensures that APRA's frontline supervisors have the best possible tools to support them in their risk analysis and supervisory interventions.42 This division consists of legal services, policy, advice and approvals, resolution and licensing.43 The Cross-industry division combines industry analysts and risk experts to assist frontline supervisors in monitoring individual entities and industries as a whole. The Enterprise Services division helps other divisions achieve APRA's strategic goals through shared services functions.44
The internal audit team advises on managing risk across all of APRA and it has an independent reporting line to the audit committee and direct access to the APRA members and executive board. The APRA is accountable to the Australian government and the parliament. It also promotes stability in the financial sector. The APRA's specialised divisions make it easier to monitor and assess the performance of the relevant financial institutions in the financial sector. This enables a more accurate evaluation of outcomes and adjustments to improve performance by the APRA. As a result, decision-making is faster and more informed, since individuals within a division have a deeper understanding of the issues and challenges pertaining to their specific domain. Each division hires employees with appropriate expertise for their specific functions to increase efficiency.
The role of the ASIC
The ASIC commenced its operation in 1991, when it was known as the Australian Securities Commission (ASC). Put differently, its name was changed from ASC to ASIC in 1998. The ASIC is the business conduct regulator responsible for promoting market integrity and consumer protection across the financial markets in Australia.45 The ASIC Act provides that the ASIC is responsible for maintaining, facilitating and improving the conduct of financial institutions.46 This Act stipulates further that the ASIC is obliged to focus on financial predictability, reducing business costs, and improving the efficiency and development of the economy.47 In addition, the ASIC ensures that investors and consumers effectively participate in the financial sector.48 It also issues financial licences and monitors financial services providers to ensure that they operate efficiently, honestly and fairly. This is done through the Australian financial services licensing system.49 The ASIC is empowered to investigate violations of the ASIC Act, initiate the prosecution of minor offences and disqualify people from managing corporations or dealing in financial services.50 The ASIC operates under the direction of three full-time Commissioners appointed by the Governor General in liaison with the relevant Minister.51 The Commissioners report to the Minister through their annual report, briefings, submissions and meetings with the Treasurer or Parliamentary Secretary.52 The ASIC seeks to curb the collapse of financial institutions by ensuring that they comply with provisions of the ASIC Act and certain competency standards. The ASIC also ensures the compliance of financial institutions with client funds requirements and it protects clients' property rights.53 However, the ASIC does not have an explicit mandate to promote financial stability. Its focus is on market conduct and investor/consumer protection.
In 2010, the ASIC took on the additional responsibility of consumer credit and finance broking, regulating trustees and fulfilling a supervisory function over trading on Australian-licensed equity, derivatives and futures markets.54 In light of the above, it appears that the APRA Act and the ASIC Act provide clear responsibilities and objectives for each regulator. The ASIC promotes confidence and the informed participation of consumers and investors in the financial sector. There is a clear demarcation between the roles of the regulators to combat possible overlapping challenges. Moreover, the APRA and the ASIC operate as independent bodies that are free from government or industry interference. To this extent, Australia is considered to have done well during the 2007-2009 global financial crisis because of the clear delineation of responsibilities between the APRA and the ASIC.55 This approach enabled the ASIC to curb regulatory arbitrage and other challenges posed by the 2007-2009 global financial crisis.56 The ASIC also managed to avoid possible conflicts of interest that could have occurred if prudential oversight and business conduct regulatory functions were concentrated in a single regulator.
The role of the RBA
In most countries the central bank is historically responsible for financial stability, prudential regulation and the supervision of banks.57 However, this has not been the case in Australia since the adoption of the Twin Peaks model of financial regulation in 1998. It is important to note that the Wallis Inquiry recommended that the prudential regulator should be a separate entity from the central bank, though it should cooperate very closely with the RBA in order to collectively maintain financial stability in Australia.58 Thus, the RBA's prudential supervision mandate was transferred to the APRA in 1998. Notably, the RBA is still responsible for the promotion of financial stability.59 As part of its financial stability mandate, the RBA assesses a variety of financial and economic data which assists in gauging the robustness of the financial sector regulations.60 Furthermore, the Reserve Bank Act 4 of 1959 was amended in 1998 to establish the Payments System Board in the RBA to promote the safety and efficiency of the Australian payments system. As part of its functions, the RBA takes steps such as maintaining low and stable inflation rates and overseeing sustainable economic growth to ensure that the environment is conducive to financial stability. The RBA also has responsibilities as the lender of last resort in line with its financial stability mandate.
The history and structure of the Twin Peaks model
The Twin Peaks model of financial regulation was adopted by South Africa almost two decades after its adoption in Australia.61 In the wake of the 2007-2009 global financial crisis, the South African government committed itself to a financial regulatory reform agenda that was aimed at strengthening its financial stability. The 2007-2009 global financial crisis highlighted several issues, including the need to better coordinate monetary and fiscal policy while minimising systemic risk.62 As a result, the government of South Africa issued a policy paper in 2011 titled 'A Safer Financial Sector to Serve South Africa Better', which outlined weaknesses in the structure and features of South Africa's financial sector, indicating that there were various gaps that needed to be addressed.63 The adoption of the Twin Peaks model by South Africa was also necessitated by the country's desire to increase transparency and accountability in its financial markets.64 The South African government acknowledged that market conduct regulation was being neglected.65 The adoption of the model also reflected the country's international commitment to a sound and safe regulatory framework.66 The country needed to align its financial sector with international trends in financial regulation.67 The South African government also wanted to avert any potential contagion by following acceptable, robust standards in financial regulation and supervision.68 Unlike in Australia, where the Twin Peaks model was introduced a year after the Wallis Inquiry, the implementation of the model was a two-phase process in South Africa. The FSCA seeks to enhance South Africa's approach to consumer protection through financial conduct regulation.70 Additionally, the adoption of Twin Peaks in South Africa was mainly aimed at creating a more resilient and stable financial sector through prudential regulation.
The role of the PA
The PA is one of the pillars that constitute the South African Twin Peaks model. The PA is established in terms of section 32 of the FSR Act. It plays a similar role to that of the APRA in Australia and it ensures that financial institutions operating in the financial sector are safe and financially sound.71 However, unlike in Australia, where the APRA is an independent body, the PA is a juristic person housed in and administered by the South African Reserve Bank (SARB).72 The PA is responsible for micro-prudential supervision, while the SARB is responsible for macro-prudential supervision and overseeing financial stability in the financial sector.73 The PA functions as a system-wide prudential regulator that is tasked with overseeing all financial institutions that provide financial products or securities services and market infrastructures.74 It is also obliged to promote sustainable competition in the provision of financial services and financial products. It is also required to cooperate and collaborate with the Competition Commission,75 and it has supervisory roles over banks, insurers and pension funds.76 The PA is governed by the Prudential Committee, which consists of the Governor of the SARB (as Chairperson), the Chief Executive Officer (CEO) of the PA (who is also a Deputy Governor of the SARB) and other Deputy Governors of the SARB.77 The CEO is appointed by the Governor of the SARB in consultation with the Minister of Finance. A person appointed as the CEO holds office for a term not longer than five years. The structure of the PA comprises four departments, namely Banking, Insurance and Financial Market Infrastructure Supervision; Financial Conglomerate Supervision; Policy, Statistics and Industry Support; and Risk Support.
The role of the FSCA
The FSCA came into operation on 1 April 2018 in terms of section 56 of the FSR Act.78 The FSCA is obliged to oversee the conduct of financial institutions operating in South Africa.79 It is responsible for protecting financial consumers and promoting confidence in the South African financial sector. It achieves this by overseeing the regulation of market conduct for various financial entities such as banks, insurers, financial advisers, intermediaries and investment institutions. The FSR Act provides that the main aim of the FSCA is to enhance and support the efficiency and integrity of financial markets and protect financial customers.80 Like the ASIC in Australia, it is solely an independent market conduct authority separate from the central bank.81 The FSCA is obliged to regulate and supervise the conduct of financial institutions in accordance with the relevant financial sector regulation laws.82 One of the objectives of the FSCA is to promote sustainable competition in the provision of financial products and financial services through cooperating and collaborating with the Competition Commission.83 The FSR Act provides for an executive committee to manage the affairs of the FSCA.84 The executive committee is made up of a commissioner and between two and four deputy commissioners, each with the appropriate expertise in the financial sector.85 The commissioner is responsible for the day-to-day management and administration of the FSCA.86 The commissioner and the deputy commissioners are appointed by the Minister of Finance. The FSR Act provides that a person appointed as commissioner or deputy commissioner holds office for a term determined by the Minister, which may not be longer than five years.87
5 Overview and comparative analysis
In South Africa, the financial sector is also regulated by other role players such as the National Credit Regulator (NCR), the National Consumer Commission (NCC), the National Consumer Tribunal (NCT) and the SARB. In terms of prudential regulation, the SARB monitors monetary policy and financial stability, including payment system safety and reliability.89 In addition, the SARB, inter alia, preserves its traditional role as a lender of last resort and a provider of emergency liquidity assistance.90 On the other hand, the NCR regulates all retail credit provision by credit providers, including banks.91 The NCR operates independently as a credit market regulator. The NCC promotes and protects the interests and rights of consumers in South Africa.
On the other hand, the NCT hears and decides on any matter involving consumers, service providers, credit providers and other aggrieved parties.
It also reviews decisions of the NCR and the NCC. Consequently, the South African Twin Peaks model does not prohibit the existence of multiple regulatory bodies in financial sector regulation. This is also the case in Australia. A third, unofficial "pillar" of the Australian financial regulatory framework is Australia's central bank. The RBA is responsible for promoting financial stability in Australia. It develops monetary policy to promote and maintain stability in the financial sector. The RBA also ensures the stability, efficiency and competitiveness of the payments system in Australia.92 The prudential regulators are structured differently in Australia and South Africa. For instance, the APRA is a stand-alone independent body93 that operates outside of the central bank's remit.94 This is not the case in South Africa, where the PA is a subsidiary of the SARB. The rationale for housing the PA in the SARB could be for the purposes of sharing financial resources and related infrastructure.95 It could also be motivated by the need for mutual collaboration and information sharing between the SARB and the PA. This approach is effective only when the central bank is completely independent from political and socio-economic influences and biases. It appears that this is the case in South Africa, where the Constitution provides for the independence of the SARB,96 which could support effective prudential supervision. The approach could also enable the prudential authority and the central bank to timeously recognise and manage all potential threats to financial stability. Moreover, incorporating the PA into the central bank could enable it to develop better economic management policies as well as high standards of transparency and accountability.97 While these are positive developments, they do not solve the problem of blurred boundaries and regulatory overlaps, since the PA is not entirely separate from the central bank, as is the case in Australia.
In addition to the above, cooperation and coordination between the two pillars are crucial for the Twin Peaks model to be effective. Regulatory frameworks that divide authority between multiple agencies require strong coordination mechanisms to ensure that issues needing regulatory oversight do not fall through the gaps. Many jurisdictions such as New Zealand and the United Kingdom have adopted a structure of Memoranda of Understanding (MOUs) and have financial stability committees in place to foster collaboration, information sharing and better coordination between regulators.98 Both South Africa and Australia recognise the need for cooperation, collaboration and coordination among financial regulators.99 The Australian Twin Peaks model provides a non-binding system of coordination among financial agencies. The APRA Act100 provides that the APRA should have regard to the desirability of cooperating with other financial sector supervisory agencies and with other agencies specified in regulations when performing and exercising its functions and powers. In relation to this, the Australian financial regulatory framework is largely dependent on informal bilateral coordination mechanisms, as well as on the Council of Financial Regulators (CFR), an informal organisation with no regulatory functions, in the form of MOUs and informal protocols.101 It must be noted that requirements for collaboration were inserted into the APRA Act after the collapse of HIH Insurance. The collapse was partly attributed to coordination deficiencies between the APRA and the ASIC.104 The APRA Act did not have any cooperation or coordination mechanism at its inception.
Like the position in Australia, the South African Twin Peaks model provides an equally non-binding system of cooperation and coordination among financial regulatory bodies.105 Financial agencies are required to collaborate and cooperate for two reasons.106 The first reason is financial stability, and the second is to ensure the efficient implementation and operation of the Twin Peaks model.107 The FSR Act mandates collaboration and cooperation at various levels, including at the cabinet and forum levels.108 It provides that financial sector regulators must cooperate and collaborate with the SARB and with each other to maintain, protect and enhance financial stability.109 The FSR Act110 also obliges the SARB to cooperate with other financial regulators in order to fulfil the financial stability mandate of the central bank effectively. The PA is required to collaborate with and assist the SARB, the Financial Stability Oversight Committee, the FSCA, the NCR, and the Financial Intelligence Centre in order to achieve its objective of financial stability.111 In addition to this, the FSR Act112 requires financial regulators to collaborate with each other when performing their functions in terms of financial sector laws such as the National Credit Act113 and the Financial Intelligence Centre Act.114 Further, the Act obliges financial regulators to inform each other and share information about matters of common interest.115 The Act also establishes the Financial Sector Inter-Ministerial Council to facilitate cooperation and collaboration among Cabinet members who administer legislation relevant to financial sector regulation and supervision. To give effect to this, the FSR Act, like the APRA Act in Australia, requires financial regulators to enter into one or more MOUs.116 The enforcement of MOUs is a challenge, however, because they are generally non-binding and do not carry the same weight as formal contracts. In the case of South Africa, non-compliance with an MOU does not affect a regulator's actions under the FICA and the NCA.117 Accordingly, it can be argued that South Africa has modelled its collaboration and cooperation mechanism on the Australian approach, which also utilises non-binding MOUs between various regulators. The two countries have thus adopted an ineffective method, the use of MOUs, to foster cooperation and collaboration among financial regulators.
It is also important to note that both Australia and South Africa employ a risk-based regulatory system in their implementation of the Twin Peaks model. Such a system provides for the development of decision-making frameworks and procedures to prioritise regulatory activities and the deployment of resources, principally inspection and enforcement activities.118 That prioritisation is determined by an assessment of the risks that regulated institutions and companies pose to the regulator's objectives.119 It is submitted that a risk-based approach is a problem-based regulation where regulators "should focus on the most important problems and fix them".120 Thus, priority should be given to high-risk problems, and relevant resources should be provided to the regulatory bodies to curb such problems. The risk-based approach also stipulates that financial institutions should take enhanced measures to manage and mitigate high risks, while less intensive measures are recommended for low risks. Therefore, a risk-based approach involves a two-stage inquiry: the first stage involves the identification of the level of risk, and the second stage entails acting on the high-level risk.
The FSR Act requires the PA to consider the need for a pre-emptive, outcomes-focussed and risk-based approach so as to prioritise its resources in accordance with the significance of risks to the achievement of its objectives.121 The FSCA is also required to employ a risk-based approach in achieving its objectives. In other words, the risk-based approach requires the FSCA to identify key risks to the achievement of its objectives and prioritise its resources according to the significance of such risks.122 The regulatory strategy of the FSCA entails that the objective of the risk-based approach is to pre-emptively identify and assess risks, including cross-sector risks, risks related to specific financial sub-sectors, and risks concerning individual financial institutions.123 It is also aimed at intervening on a timely basis when the governance, culture or practices of a supervised financial institution are imprudent, unsafe and endanger the integrity of the markets to the detriment of consumers and market participants. However, the FSR Act does not provide detailed guidance on how regulators should employ the risk-based approach. It is silent on the mechanisms and/or measures that should be employed when applying the risk-based approach in South Africa.
On the other hand, Australia has implemented a robust and detailed risk-based regulatory system to enforce its Twin Peaks model, a system modelled and shaped by the collapse of the HIH insurance company.124 The collapse of this company exposed weaknesses in the APRA's risk assessment mechanisms and the absence of an effective supervision culture.125 To this end, the APRA modified its risk-based approach to financial regulation in 2002 by introducing a system consisting of two elements, namely the Probability and Impact Rating System (PAIRS) and the Supervisory Oversight and Response System (SOARS).126 The PAIRS is a framework for assessing how "risky" an institution is in relation to the APRA's objectives.127 It focusses on the probability and impact of the risks that a particular institution poses to the APRA's objectives, namely ensuring that financial institutions meet their obligations to beneficiaries in the context of an efficient and competitive financial sector.128 Under the PAIRS, financial institutions are ranked according to their ability to meet relevant financial commitments and their impact on the Australian financial sector should they be liquidated.129 The output of the PAIRS process is a risk score which is translated into a probability index rating and an impact score.130 The SOARS is designed to determine how officials respond to that risk. It consists of two components, namely the supervisory attention index and the supervisory stance.131 The supervisory attention index rating is determined by taking the geometric average of the probability rating and the impact index rating.132 This rating is intended to set the level of resources to be applied to the financial institution.
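The geometric-average rule just described is the only computational element of SOARS stated here; the sketch below simply instantiates it. The rating scales and the mapping from the index to supervisory stances are APRA-internal details not reproduced in this article.

```python
import math

def supervisory_attention_index(probability_rating, impact_rating):
    """Geometric average of the PAIRS probability rating and impact
    rating, as the SOARS supervisory attention index is described above."""
    return math.sqrt(probability_rating * impact_rating)

# e.g. a mid-probability (2.5), high-impact (4.0) institution:
# supervisory_attention_index(2.5, 4.0) -> ~3.16
```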
South Africa also adopts a risk-based approach to enforce the Twin Peaks model. However, unlike Australia, South Africa does not provide robust risk-based assessment mechanisms. It appears that South Africa has not yet developed a detailed risk management mechanism because it has not yet experienced the same level of instability that was experienced in Australia when HIH Insurance collapsed. In South Africa, both the PA and the FSCA are mandated to utilise the risk-based approach in fulfilling their objectives.
On the other hand, only the APRA is obliged to rely on the risk-based approach to fulfil its functions in Australia.
The reliance on the risk-based approach empowers regulatory bodies to employ appropriate measures that are commensurate with the relevant risks posed to financial institutions. The risk-based approach requires resources to be directed where they are needed most, to curb high-risk threats to the financial sector. If properly employed, the risk-based approach reduces compliance burdens by minimising regulatory intervention where detected risks are relatively low.133 It is assumed that regulatory bodies may sometimes fail to respond to all alleged breaches or to monitor all illicit conduct in the financial sector.134 This means that there are some risks that do not pose much threat to the financial sector. Regulatory bodies should therefore carefully determine the severity of each risk.
In terms of the governance framework, individuals tasked with supervising the implementation of the Twin Peaks model are formally appointed in both South Africa and Australia. In South Africa, the Commissioners governing the FSCA are appointed by the Minister of Finance.135 The same is true for the PA, whose CEO is appointed by the Governor of the SARB.136 In Australia, the ASIC Commissioners and the APRA Chairperson and Deputy Chairperson are appointed by the Governor-General from the nominations of the relevant Minister.137 Nevertheless, there are some differences in terms of where these officials report. The APRA is to some extent answerable to the Treasurer, and both the APRA and the ASIC report to the Federal Parliament through the submission of annual reports.138 In South Africa, the PA reports to the deputy governor and is accountable to the SARB.139 The PA also prepares an annual report on its activities, which is submitted to the Minister of Finance for reporting in Parliament.140 The Governor-General has the power to terminate the appointment of an APRA member under the APRA Act.141 This is in contrast to the situation in South Africa, where the Minister of Finance has the power to appoint and dismiss the PA's Chief Executive Officer and is vested with the authority to remove an FSCA Commissioner from office.142 This approach could result in undue political interference in the functioning of the PA and the FSCA.143 Thus, the current South African Twin Peaks model does not expressly provide measures that prevent and combat undue political interference on the part of the PA and the FSCA.144
The second phase comprised harmonising financial sector legislation such as the Banks Act 94 of 1990, the Short-Term Insurance Act 53 of 1998 and the Long-Term Insurance Act 52 of 1998 with the Financial Sector Regulation Act 9 of 2017 (FSR Act), and developing conduct of financial institutions legislation. South Africa's Twin Peaks model established two separate regulators, namely the FSCA, which is the market conduct regulator, and the PA, which is the prudential regulator. However, unlike in Australia, where these two separate regulators are governed by two different statutes, both the FSCA and the PA are established by the FSR Act in South Africa.
One striking difference between the implementation of the Twin Peaks model in South Africa and Australia is that in the latter the regulatory model is implemented by two different statutes, while in the former one statute is used. The ASIC Act and the APRA Act govern the implementation of the Twin Peaks model in Australia, while the FSR Act governs the same in South Africa.88 However, this difference does not have an effect on the implementation of the Twin Peaks model in these two countries. Moreover, there are no under-regulation or over-regulation challenges relating to the reliance on the Twin Peaks model in either South Africa or Australia. The FSR Act is the overarching statute enforcing the Twin Peaks model in South Africa, and there has so far been no under-regulation and/or over-regulation of financial institutions in the financial sector. Thus, the financial sector is adequately regulated with the Twin Peaks model under the FSR Act. The FSR Act provides the necessary mechanisms to ensure compliance with the rules and standards of the Twin Peaks model. Similarly, even though two statutes underpin the Australian Twin Peaks, financial institutions are not subject to excessive, draconian or burdensome regulation. Both Australia and South Africa do not apply the model rigidly. Each of these countries has carefully adopted the Twin Peaks model with two peak regulators that have clear and distinct objectives on, inter alia, prudential regulation, market conduct regulation and consumer protection.
85 Section 58(1)(a) of the FSR Act.
86 Section 61 of the FSR Act; FSCA 2022 https://www.fsca.co.za/Annual%20Reports/FSCA%20Annual%20Report%202021-2022.pdf 5-70.
87 Section 63 of the FSR Act.
88 Qumba 2022 SALJ 96; see the preamble of the FSR Act.
Neither the CFR nor the content of the regulatory MOUs is prescribed by statute.102 This soft law approach of employing MOUs is merely persuasive and not legally binding. Consequently, MOUs are non-binding measures that do not give rise to legal consequences for any non-compliance on the part of the offenders.103 Therefore, there is poor enforceability of the relevant laws and related measures and/or soft law instruments.
101 Van Niekerk and Van Heerden 2020 SALJ 130.
As noted above, South Africa and Australia have fundamentally sound designs of the Twin Peaks model. The model is not rigidly enforced in either South Africa or Australia; in this regard it has been carefully adapted to suit the socio-economic needs and circumstances of each country. In addition to the prudential regulator and the market conduct regulator, the central banks still play a fundamental role in ensuring financial stability in both South Africa and Australia. Nonetheless, as indicated above, there are still some gaps and flaws that need to be addressed. Accordingly, it is submitted that the Twin Peaks model should be carefully enforced in both countries to foster cooperation and collaboration among the prudential regulator, the market conduct regulator, the central banks and other relevant role-players. Moreover, it is submitted that South Africa should consider following the Australian approach of not housing the PA in the central bank, to promote its independence and curb political interference. However, the complete separation of the prudential regulator from the central bank should be flexibly utilised so as not to discourage relevant cooperation and collaboration between the prudential regulator, the market conduct regulator, the central banks and other relevant role-players. Both Australia and South Africa should carefully move away from the soft law approach of relying on MOUs and enact adequate statutory provisions in the APRA Act and the FSR Act to foster compliance, cooperation and collaboration between financial institutions, market participants, financial regulators and other role-players. This approach could ensure that all relevant parties are legally bound to comply with the MOUs and the statutory provisions of the APRA Act and the FSR Act. The risk-based approach should be effectively utilised in Australia and South Africa to enforce the Twin Peaks model and to detect, prevent and combat all systemic risks in their respective financial sectors. In relation to this, South Africa should consider amending the FSR Act to enact provisions that expressly and adequately stipulate the measures and/or mechanisms that should be used to implement its risk-based approach to enforce the Twin Peaks model.
Bibliography
APRA (date unknown) Memorandum of Understanding: The Reserve Bank of Australia and the Australian Prudential Regulation Authority https://www.apra.gov.au/sites/default/files/MoU-RBA-Reserve-Bank-of-Australia.PDF accessed 16 April 2024
APRA (2016) Information Paper: Risk Culture https://www.apra.gov.au/sites/default/files/information-paper-risk-culture1.pdf accessed 16 April 2024
APRA (2018) Probability and Impact Rating System https://www.apra.gov.au/sites/default/files/2018-02-pairsguide-ud-external_0.pdf accessed 16 April 2024
APRA (2020) Supervision Risk and Intensity (SRI) Model https://www.apra.gov.au/supervision-risk-and-intensity-sri-model accessed 16 April 2024
APRA (2021) Governance and Senior Executive Accountabilities https://www.apra.gov.au/sites/default/files/2021-01/Governance%20and%20Senior%20Executive%20Accountabilities.pdf accessed 16 April 2024
APRA (2023) APRA's Organisation Structure https://www.apra.gov.au/apras-organisation-structure accessed 22 February 2024
APRA (2024) APRA's Functions https://www.apra.gov.au/apras-functions-0 accessed 16 April 2024
ASIC (date unknown) History https://asic.gov.au/about-asic/what-we-do/our-role/history/ accessed 26
Schmulow (2018) South Africa joins the club that regulates financial markets through 'Twin Peaks' https://theconversation.com/south-africa-joins-the-club-that-regulates-financial-markets-through-twin-peaks-95558 accessed 16 April 2024
The South African Treasury (2013) Implementing a Twin Peaks Model of Financial Regulation in South Africa https://www.treasury.gov.za/twinpeaks/20131211%20%20item%203%20roadmap.pdf accessed 14 October 2023
The Treasury, Australian Government (1996) Financial System Inquiry Final Report https://treasury.gov.au/publication/p1996-fsi-fr accessed 14 October 2023
UNSW Law Society (2020) Prudential Regulation Authority (APRA) https://issuu.com/unswlawsociety/docs/pic_2020_final_copy/s/11207013 accessed 15 February 2024
117 Sections 26(4) and 77(3) of the FSR Act.
139 Section 55 of the FSR Act.
140 Section 55 of the FSR Act; IMF 2012 Country Report No. 12/314.
142 Sections 36 and 39 of the FSR Act.
143 Osode 2021 Interdisciplinary Journal of Economics and Business Law 9-32.
144 Osode 2021 Interdisciplinary Journal of Economics and Business Law 9-32.
"Law",
"Economics",
"Political Science",
"Business"
] |
Foundations of Finsler spacetimes from the Observers' Viewpoint
Physical foundations for relativistic spacetimes are revisited in order to check to what extent Finsler spacetimes lie in their framework. Arguments based on inertial observers (as in the foundations of Special Relativity and Classical Mechanics) are shown to correspond with a double linear approximation in the measurement of space and time. While General Relativity appears by dropping the first linearization, Finsler spacetimes appear by dropping the second one. The classical Ehlers-Pirani-Schild approach is carefully discussed and shown to be compatible with the Lorentz-Finsler case. The precise mathematical definition of Finsler spacetime is discussed by using the {\em space of observers}. Special care is taken in some issues such as: the fact that a Lorentz-Finsler metric would be physically measurable only on the causal directions for a cone structure, the implications for models of spacetimes of some apparently innocuous hypotheses on differentiability, or the possibilities of measurement of a varying speed of light.
Introduction
A plethora of alternatives to classical General Relativity has been developed since its very beginning. Many of them were motivated by the search for a unified theory which solved disturbing issues of compatibility with Quantum Mechanics (Kaluza-Klein, M-theory, quantum field gravity...) while, since the 90's, unexpected cosmological measurements led to further alternatives (cosmological constant, quintessence, theories with varying speed of light...). However, the possibility to consider a Finslerian modification of GR has not settled in the mainstream of research and it has been scarcely considered in the literature until recent times (some examples are references [1,16,17,29,30,32,37,43,46,50,57,67,71,74,81]). Certainly, the generality of Finsler Geometry in comparison with the Riemannian setup (namely, analogous to the generality of the convex open subsets of an affine space in comparison with the ellipsoids) is a big drawback, as the number of new variables and parameters would seem immeasurable. Nevertheless, this is similar to the generality of General Relativity in comparison with the Special one (see Remark 6.1). Anyway, any Finslerian modification of General Relativity would mean dropping the beloved Lorentz invariance not only at global and local levels (as occurs in General Relativity) but also infinitesimally, i.e. regarding such an invariance as a limit symmetry around each event. However, from a fundamental viewpoint, this should not seem too strange: as physical measurements are always approximations, one would not be surprised if the symmetries of the models were only approximations to a more complex reality. Indeed, as we will explain, the existence of some symmetries among observers becomes a natural requirement in order to make direct measurements of space and time. There is no reason to assume that the physical reality will satisfy such requirements in an exact way, even though, certainly, the existence of such approximated symmetries is meaningful and useful for modeling.
In the present article, a physical motivation to consider Finsler spacetimes as models of space and time is developed, and quite a few of related ideas are discussed. We stress the following four guidelines.
1. Approach from the foundations viewpoint. We develop an approach for the foundations of the theories of spacetime starting at the observers' viewpoint in Classical Mechanics and Special Relativity (§2-4). Finsler spacetimes are shown to appear by dropping the symmetries of inertial observers in a natural way. Our approach follows the viewpoint in [9] by López and two of the authors in §2 and §3, which includes the celebrated ideas by V. Ignatowski [40,41,42] about the foundations of Special Relativity.1 The reason relies on a classical result for any Finsler metric F: its square F^2 is C^2 at the zero section if and only if F comes from a Riemannian metric (see Remark 4.4 (1) and §5.2).
Indeed, even in the classical relativistic case, only the future-directed causal directions for a cone C^+ determined by the metric g contain the elements physically measurable in any (true or gedanken) experiment. In Relativity, the Lorentzian scalar product g_p at each event p is determined by its value on the cone C^+_p (or on its timelike directions); therefore, a Lorentz metric g can be determined on the whole TM even if, actually, only its value on C^+ can be measured. However, this is not by any means true for a Lorentz-Finsler metric L, where there is a huge freedom to extend the Lorentz-Finsler metric away from C.
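The following standard polarization computation (a sketch of ours, not a quotation from the text) makes the asymmetry explicit: values of g along timelike directions determine the whole bilinear form, while the 2-homogeneity of L yields no analogous extension.

```latex
% For u, v in the open cone C^+_p and |s| small, u + sv stays in C^+_p, so
\[
  g_p(u+sv,\,u+sv) \;=\; g_p(u,u) \;+\; 2s\,g_p(u,v) \;+\; s^2\,g_p(v,v)
\]
% is measurable, and differentiating at s = 0 recovers the bilinear form,
\[
  g_p(u,v) \;=\; \tfrac12 \left.\frac{d}{ds}\right|_{s=0} g_p(u+sv,\,u+sv),
\]
% so bilinearity extends g_p from C^+_p to all of T_pM. A Lorentz-Finsler L
% is only positively 2-homogeneous, so its values on C^+_p single out no
% canonical extension off the cone.
```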
So, our Lorentz-Finsler metrics will be defined only on a (causal) cone structure 2 .
(2) Smoothness, i.e., differentiability up to some appropriate order. Usually, such a requirement is regarded as a harmless macroscopic approximation to the structure of the spacetime. However, the discussion on EPS above shows that this is not so trivial in the Finslerian case. What is more, other issues appear in the literature:
• The possibility that the cone is smooth and the Lorentz-Finsler metric is smooth only on the timelike directions but cannot be smoothly extended to the cone, which happens in metrics such as Bogoslovski's in Very Special Relativity [12] and others [73], see §6.1.
• The lack of differentiability outside the zero section of Finsler product spacetimes, which may lead to definitions of Finsler static spacetimes which are not smooth in the static direction [16], a fact which can be overcome with our approach to the space of observers, see §4.2 (item 5 (b)).
4. Importance of the space of observers. The relevance of the space of observers in Special and General Relativity, its links with the symmetries of the spacetime and the possibility to lift Relativity to this space have been stressed by several authors [34,36] in the framework of Lorentz violation and Lorentz-Finsler geometry. It is worth emphasizing that the essential role of this space appears explicitly along our development. In the initial discussion of the linearized models, we start with the set S of inertial frames of reference (IFR), which permits even signature-changing metrics (§3.1). However, once the symmetries of these models are dropped, only the space of observers O remains as physically meaningful (Definition 3.1). In a classical relativistic spacetime (M, g), O is just the submanifold Σ^g ⊂ TM of all the g-unit vectors in the future timelike cone; thus, each Σ^g_p := Σ^g ∩ T_pM is a hyperbolic space in the tangent space T_pM of each event p ∈ M. Breaking Lorentz symmetry at each p leads to regarding Σ^g just as a more general pointwise concave hypersurface Σ, which then becomes the indicatrix of a Lorentz-Finsler metric L (see Remark 4.13).
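In formulas, restating the relativistic case just described (assuming the (-,+,+,+) signature convention, so observers are normalized by g(v,v) = -1; with the opposite convention the sign flips):

```latex
\[
  \Sigma^g \;=\; \{\, v \in TM \;:\; g(v,v) = -1,\ v\ \text{future-directed timelike} \,\},
  \qquad
  \Sigma^g_p \;=\; \Sigma^g \cap T_pM \;\cong\; \mathbb{H}^3 .
\]
```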
This observers' viewpoint allows one to use geometric methods recently developed in [50] which may have interesting physical applications such as: (a) going from g to L by perturbing the pointwise hyperboloids Σ g into pointwise concave hypersurfaces Σ (as suggested in §6.3, such a perturbation might be produced by the presence of matter/energy and lead to quantum consequences), (b) to avoid or to smoothen possible singularities in Σ and then in L (showing that known non-smooth physical examples can always be approximated by smooth ones), (c) to construct systematically any Lorentz-Finsler metric from a Riemannian and a Finslerian one or (d) to construct systematically static and stationary metrics (avoiding any problem of smoothability).
In our opinion, the previous ideas support strongly that Finsler spacetimes have become an exciting vast field to explore thoughtfully from both the physical and mathematical viewpoints.
The doubly linearized models
Next, we develop our approach to the foundations of the theories of spacetime. In contrast with the EPS approach, we will not assume postulates on the nature of the behavior of the physical objects which will be measured, but on how we can measure those physical objects. A posteriori, if we are able to measure by using some sort of symmetry, the spacetime itself will be endowed with the geometric structure which codifies such symmetries.
The first step, to be developed along this section, considers the simplest symmetries for observers, common to both Classical Mechanics and Special Relativity. They will be regarded later as a (linear) idealization.
2.1. Postulates. Let us introduce the approach to the theories of spacetimes following 3 [9] (a priori, this is non-quantum, even though quantum links will appear in §6.3).
The physical considerations on the existence of inertial frames of reference are encoded in the following two postulates.
Postulate 1 (Linear approach to spacetime). The physical spacetime is endowed with a structure of affine space Aff on a real vector space V of dimension n = 4. Physical observers are able to construct a non-empty set S IFR of affine frames of reference (each one R = (O, B) composed of a point O ∈ Aff and a basis B of V ), which are called inertial frames of reference (IFR).
Thus, each IFR, R, provides an affine chart, i.e. a bijection ϕ : Aff → R 4 , ϕ(P ) = (t(P ), x 1 (P ), x 2 (P ), x 3 (P )), such that, given another IFR, R̄, the coordinate change φ̄ ◦ ϕ −1 : R 4 → R 4 is an affine map. The first coordinate t of each IFR will be called temporal and the other three x i , spatial.

3 It is worth pointing out that [9] focuses on the viewpoint of General Relativity; so, the first postulate there is different from the one here. Our viewpoint was presented in the reference [11] (written for a general audience in Spanish) and it is developed further here by introducing concepts such as apparent temporality (Theorem 2.4) or arguments such as those on the variation of the speed of light.
The meaning of this first postulate is just that a linear approximation Aff to spacetime is being considered. The postulate also says that physicists will be able to construct some of the natural charts of the affine space Aff. The physical process to obtain such charts is not specified, even though the names temporal and spatial suggest the nature of their measurements.
Our second postulate, based essentially on von Ignatowski's [40], will ensure just that, when making measurements of the temporal coordinate (resp. when making measurements of the spatial coordinates), the viewpoints of two IFR's are interchangeable. This will be reflected by a requirement of symmetry in the corresponding charts. To understand this symmetry easily, let us discuss the bidimensional case n = 2. Let R, R̄ be two IFR's with coordinates (t, x) and (t̄, x̄), resp. By Postulate 1, the coordinate change is affine. The interchangeability of the viewpoints of R and R̄ will reflect the following physical assertion: the temporal coordinate t̄ (resp. the spatial coordinate x̄) of R̄ measured by using the physical clock (resp. the rod) of R goes by as the temporal coordinate t (resp. the spatial coordinate x) of R measured by using the physical clock (resp. the rod) of R̄. Mathematically,

∂t̄/∂t = ∂t/∂t̄,   ∂x̄/∂x = ∂x/∂x̄.

In dimension n = 4, interchangeability between the three spatial coordinates will also be imposed.
Postulate 2 (Time and spatial interchangeability). Let R, R̄ ∈ S IFR be two IFR's. Then, their coordinates (t, x 1 , x 2 , x 3 ) and (t̄, x̄ 1 , x̄ 2 , x̄ 3 ) satisfy:

∂t̄/∂t = ∂t/∂t̄,   ∂x̄ i /∂x j = ∂x j /∂x̄ i (i, j = 1, 2, 3).   (3)

2.2. Groups O (k) (4, R). The linear part of an affine change of coordinates from a first IFR, R, to a second one, R̄, will be called the transition matrix A from R to R̄. The second postulate implies that the transition matrices satisfy the condition (4) below, so, in order to obtain all the possibilities, one just needs to solve the following algebraic exercise.
Exercise 1. Let A ∈ GL(4, R) be a regular 4 × 4 matrix and A −1 its inverse. Write them by using boxes as follows:

A = [ a 00 , a h t ; a v , Â ],   A −1 = [ ã 00 , ã h t ; ã v , Ã ],

where a 00 , ã 00 ∈ R, a h , a v , ã h , ã v ∈ R 3 , Â, Ã are 3 × 3 submatrices, and the superscript t denotes transpose. Then, determine those matrices A satisfying:

ã 00 = a 00 ,   Ã = Â t .   (4)

Such an exercise is solved in [9, §3] in full detail. Next, we will describe the main properties of its solutions 4 .

Definition 2.1. Let S 1 = R ∪ {ω} be the circle regarded as the extended real line R * = [−∞, +∞] with +∞ identified to −∞ as a single point ω. For each k ∈ S 1 , consider the matrix I (k) = diag(k, 1, 1, 1) (k ∈ R) and define the group O (k) (4, R) ⊂ GL(4, R) as follows:

O (k) (4, R) := { A ∈ GL(4, R) : A t I (k) A = I (k) } for k ∈ R \ {0},

the cases k = 0, ω being defined by the corresponding limits of the equivalent relation in Remark 2.2(1) below. We will say that A ∈ GL(4, R) is k-congruent when A ∈ O (k) (4, R). Accordingly, two IFR's R, R ′ are k-congruent if so is their transition matrix. It is easy to check that any k-congruent matrix A is a solution of Exercise 1. Remarkably, it will turn out that the converse holds except in very exceptional cases (detailed in [9, Prop. 3.1]). Indeed, these exceptional cases will be avoided by using very mild and natural conditions from both the mathematical and physical viewpoints (any of the hypotheses (1)-(4) in the main Theorem 2.4 below).
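To make the notion of k-congruence concrete, here is a minimal numerical sketch in Python. It assumes the reconstruction I (k) = diag(k, 1, 1, 1) and the defining relation A t I (k) A = I (k) used above (for k ∈ R \ {0}), and it checks that a standard Lorentz boost is k-congruent precisely for k = −c 2 :

```python
import numpy as np

def I(k):
    """The matrix I^(k) = diag(k, 1, 1, 1) (reconstruction assumed above)."""
    return np.diag([k, 1.0, 1.0, 1.0])

def is_k_congruent(A, k, tol=1e-9):
    """Check the defining relation A^t I^(k) A = I^(k), for k in R \\ {0}."""
    return np.allclose(A.T @ I(k) @ A, I(k), atol=tol)

# Lorentz boost of speed v in the x^1 direction, in units where the
# invariant constant is c (so that k = -c^2):
c, v = 3.0, 1.2
gamma = 1.0 / np.sqrt(1.0 - (v / c) ** 2)
A = np.eye(4)
A[0, 0] = A[1, 1] = gamma
A[0, 1] = -gamma * v / c**2    # row 0 is the temporal coordinate t
A[1, 0] = -gamma * v

print(is_k_congruent(A, -c**2))   # True
print(is_k_congruent(A, -1.0))    # False: here k is univocally determined
```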
Remark 2.2. (1) In the case k ≠ 0, ω, the equality A t I (k) A = I (k) implies det A 2 = 1 trivially. What is more, this equality is equivalent to

A −1 = (I (k) ) −1 A t I (k) , that is, ã 00 = a 00 , ã h = k −1 a v , ã v = k a h , Ã = Â t .

Then, the case k = ω becomes equivalent to taking the limit k → ω (≡ ±∞) in these relations (so that a h = 0 = ã h ), and analogously for k = 0 (so that a v = 0 = ã v ). (2) If A is k-congruent for two distinct values of k, then so it is for any k. Concretely, let k 1 ≠ k 2 ∈ S 1 ; from [9, Lemma 3.3] (see its part 1 and proof):

O (k 1 ) (4, R) ∩ O (k 2 ) (4, R) = ∩ k∈S 1 O (k) (4, R) = {±1} × O(3, R),

where O(3, R) is the usual orthogonal group and {±1} × O(3, R) is identified with the set of block-diagonal matrices diag(±1, Â), Â ∈ O(3, R). Now, the relevant solutions to our exercise can be easily described.
Lemma 2.3. (1) If det A 2 ≠ 1, then A is not k-congruent for any k ∈ S 1 (i.e., A is incongruent). (2) If det A 2 = 1, then there exists k ∈ S 1 such that A is k-congruent, and either k is unique or it can be arbitrarily chosen in S 1 . (3) Let A 1 , A 2 ∈ GL(4, R) be k 1 - and k 2 -congruent, resp. If k 1 is univocally determined, then k 2 = k 1 .

In the bidimensional case, part (1) can be understood by writing A = [ a, b ; c, d ] and by noticing that, from the algorithm to compute the inverse matrix, the conditions ã 00 = a 00 and ã 11 = a 11 in (3) read a = d/det A and d = a/det A. In particular, d = 0 ⇔ a = 0 and, otherwise, det A 2 = 1. Therefore, this equality would follow by assuming additionally a > 0 (i.e., ∂t̄/∂t > 0 in (3)), which will correspond with the condition of apparent temporality in Theorem 2.4.
Proof. Assertion (1) follows from the sentence above [9, Lemma 3.3] (recall that, as explained at the beginning of the paragraph containing that sentence, incongruent means det A 2 ≠ 1). For (2), the existence of k follows also from the paragraph above [9, Lemma 3.3] and the uniqueness from part 1 of [9, Lemma 3.3], regarding S p as a set of two congruent observers and A as the transition matrix between them, or from Remark 2.2(2). Assertion (3) follows from part 1 of [9, Lemma 3.3] regarding S p as a set of three congruent observers with transition matrices A 1 , A 2 and, say, A 1 · A 2 (and their inverses). Then, all of them must be k ′ -congruent for some k ′ and, as k 1 was univocally determined, k ′ = k 1 .
Lemma 2.3 implies that, under minimal realistic hypotheses, any set S IFR of IFR's determines (at least) one value of k ∈ S 1 . Mathematically, such realistic properties just ensure that det A = ±1, which would be related to the conservation of the volume. Such a property might also be postulated directly; nevertheless, there are other physically sound weak hypotheses that imply it.
In order to formulate such hypotheses, recall first that the set S IFR of IFR's obtained from our postulates is rather arbitrary. For example, the unique restriction on its number of elements comes from S IFR ≠ ∅; that is, one can arbitrarily remove some elements of S IFR (but not all of them) and this new set would satisfy the postulates 1 and 2 too. What is more, if there is some k ∈ S 1 such that S IFR is composed of (a small number of) k-congruent IFR's, one can enlarge S IFR by acting with the group O (k) (4, R), obtaining a bigger set S of compatible IFR's. Notice that if there were a second k ′ ≠ k such that all IFR's in S IFR were k ′ -congruent, a different enlargement S ′ could also be obtained. These observations suggest the following construction. Given S IFR , define

S * IFR := ∩ α S α ,   (5)

where each S α is a set of affine reference frames satisfying: (i) S α includes S IFR , (ii) the change of coordinates between any two elements of S α satisfies the formula (3) in Postulate 2, and (iii) S α is maximal (i.e., not included in a bigger set satisfying the previous conditions (i) and (ii)). Recall: (a) S * IFR (⊃ S IFR ) is determined univocally by S IFR ; (b) physically, all the affine reference frames in S * IFR could be regarded as IFR's with the same status as those in S IFR ; and (c) mathematically, one would expect that the transition matrices between all the pairs of elements of S * IFR had a more natural structure than those of S IFR .

Theorem 2.4. Let S IFR be a set of IFR's (satisfying the Postulates 1 and 2). There exists k ∈ S 1 such that the transition matrix A ∈ GL(4, R) of each transformation of coordinates between two IFR's, R 1 and R 2 , is k-congruent for all R 1 , R 2 ∈ S IFR , whenever any of the following hypotheses holds: (1) Conservation of the IFR volume: det A = ±1, for any transition matrix A.
(2) Transitivity: if A is the transition matrix from a first IFR, R 1 ∈ S IFR , to a second IFR, R 2 ∈ S IFR , then there exists an IFR, R 0 , such that the transition matrix from R 0 to R 1 is equal to A.
(3) Action by a group: the set of transition matrices A between elements of S * IFR (as in (5)) is a subgroup G of GL(4, R). (4) Apparent temporality: any transition matrix A between elements of S IFR satisfies a 00 > 0 (with a 00 as in Exercise 1; recall also the discussion at §2.4).
Moreover, the existence of such a k implies that the properties (1), (2) and (3) above hold. For the last assertion, let us check that, when such a k exists, then (3) holds. Indeed, one of the sets S α in the definition of S * IFR , say S k , can be chosen such that the group O (k) (4, R) acts transitively on S k (just choose R ∈ S IFR and take all the affine reference frames R ′ with transition matrix A in O (k) (4, R)). So, when k is univocally determined for one pair of elements R 1 , R 2 ∈ S IFR , then S * IFR = S k and the hypothesis (3) holds with the group G = O (k) (4, R). Otherwise, k can be arbitrarily chosen by Lemma 2.3(2); then S * IFR = ∩ k∈S 1 S k and (3) holds with the group G = {±1} × O(3, R) (see Remark 2.2(2)).
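A quick numerical illustration of hypothesis (3) in Theorem 2.4, under the same assumed reconstruction as the previous sketch: the set of k-congruent matrices is closed under products and inverses, i.e., it is a subgroup of GL(4, R):

```python
import numpy as np

def boost(v, c):
    """Lorentz boost of speed |v| < c in the x^1 direction (k = -c^2)."""
    g = 1.0 / np.sqrt(1.0 - (v / c) ** 2)
    A = np.eye(4)
    A[0, 0] = A[1, 1] = g
    A[0, 1] = -g * v / c**2
    A[1, 0] = -g * v
    return A

c = 1.0
Ik = np.diag([-c**2, 1.0, 1.0, 1.0])
A1, A2 = boost(0.3, c), boost(0.5, c)

# Closure under products and inverses, as required by hypothesis (3):
for M in (A1 @ A2, np.linalg.inv(A1)):
    print(np.allclose(M.T @ Ik @ M, Ik))   # True, True
```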
2.3. Linear models of spacetimes. Theorem 2.4 implies that, whenever one of its mild hypotheses (1)-(4) holds, the existence of a set S IFR of IFR's according to Postulates 1 and 2 selects a group G = O (k) (4, R) (or the intersection of all of them). As the spacetime was represented by an affine space Aff on a vector space V by Postulate 1, this vector space (and then Aff) will be endowed automatically with the geometric structure invariant by G. Let us study each case.
(1) Case k ∈ (−∞, 0). By the definition of O (k) (4, R), V is naturally endowed with a Lorentzian scalar product ⟨·, ·⟩ 1 . Indeed, if R = (O, B = (e 0 , e 1 , e 2 , e 3 )) is any IFR, then the unique ⟨·, ·⟩ 1 such that B is an orthonormal basis for it, up to the normalization of its first vector (|⟨e 0 , e 0 ⟩ 1 | 1/2 = √(−k)), becomes independent of the chosen R. What is more, for k = −1, the group O (k) (4, R) is the Lorentz group; otherwise, O (k) (4, R) is conjugate to the Lorentz group. Indeed, putting k = −c 2 with c > 0, then I (k) = I (c) · I (−1) · I (c) , the inverse of I (c) is I (1/c) , and A ∈ O (k) (4, R) if and only if I (c) A I (1/c) ∈ O (−1) (4, R) (a numerical check of this conjugation is sketched after the list of cases). Anyway, the spacetime of Special Relativity is obtained.
(2) Case k = ω. In this case, the dual basis B * = (φ 0 , φ 1 , φ 2 , φ 3 ) of each IFR contains the same first element φ 0 , up to a sign; choosing a sign, this 1-form defines the absolute time. The kernel E of ±φ 0 is endowed with a scalar product ⟨·, ·⟩ E (the elements (e 1 , e 2 , e 3 ) of B being an orthonormal basis of ⟨·, ·⟩ E for each IFR). Then, E endowed with this scalar product is called the absolute space. Summing up, the spacetime of Galilei-Newton is recovered now.
(3) Case k = 0. In this case, the basis B = (e 0 , e 1 , e 2 , e 3 ) of each IFR contains the same first element e 0 , up to a sign. Choosing a sign, this vector defines the absolute rest observer. Thus, the kernel (annihilator) of ±e 0 in the dual space V * (that is, the subspace E * := Span{φ 1 , φ 2 , φ 3 } of B * for each IFR) is also independent of the IFR. It is naturally endowed with a scalar product ⟨·, ·⟩ E * so that, for each IFR, the set (φ 1 , φ 2 , φ 3 ) becomes an orthonormal basis. Summing up, an a priori aphysical dual of the Galilei-Newton spacetime (with a completely analogous geometric structure) is obtained. (4) Case k ∈ (0, ∞). For k = 1, the group O (k) (4, R) is the Euclidean orthogonal group 6 ; otherwise, O (k) (4, R) is conjugate to this group. Indeed, reasoning as in the case k < 0, V is naturally endowed with a Euclidean scalar product ⟨·, ·⟩ 0 and any basis B of an IFR is orthonormal for ⟨·, ·⟩ 0 , up to the normalization of its first vector. Summing up, one obtains the a priori aphysical case when the full spacetime is endowed with a Euclidean scalar product, which is mathematically analogous to the Lorentzian one. (5) Case k ∈ S 1 non-unique. In this case, the group is G = {±1} × O(3, R) and, thus, the basis B and its dual B * for any IFR satisfy all the properties in the previous cases. In particular, choosing a sign, one has an absolute time T , an absolute rest observer e 0 (with T (e 0 ) = 1) and an absolute space (E, ⟨·, ·⟩ E ) whose dual space can be identified with (E * , ⟨·, ·⟩ E * ) defined in the case k = 0.
This case should be regarded as aphysical too 7 and, being obtained as a "degenerate" case of the previous ones, it will not be taken into account anymore.
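The conjugation announced in case (1) can be checked numerically as follows (again a sketch under the assumed reconstruction I (k) = diag(k, 1, 1, 1)): I (k) = I (c) · I (−1) · I (c) , (I (c) ) −1 = I (1/c) , and conjugating a Lorentz matrix by I (c) yields a k-congruent one:

```python
import numpy as np

def I(k):
    return np.diag([k, 1.0, 1.0, 1.0])

c = 2.0
k = -c**2
print(np.allclose(I(c) @ I(-1.0) @ I(c), I(k)))       # True
print(np.allclose(np.linalg.inv(I(c)), I(1.0 / c)))   # True

# A Lorentz matrix B (boost with c = 1) conjugated back to O^(k)(4, R):
w = 0.4
g = 1.0 / np.sqrt(1.0 - w**2)
B = np.eye(4)
B[0, 0] = B[1, 1] = g
B[0, 1] = B[1, 0] = -g * w
A = I(1.0 / c) @ B @ I(c)
print(np.allclose(A.T @ I(k) @ A, I(k)))              # True: A is k-congruent
```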
2.4. Temporal models and interpretation of k = −c 2 . Taking into account the previous four models of spacetime which depend on a unique k ∈ S 1 , let us revisit the role of the hypothesis of apparent temporality in Theorem 2.4.
Recall that apparent temporality was enough to ensure the existence of k in that theorem. However, the Euclidean case k > 0 would not be excluded by this hypothesis, because the set S IFR of all the IFR's might contain "few" elements (so that only transition matrices A with a 00 = cos θ appeared for values of θ with cos θ > 0). Moreover, in the other three cases for k, the elements of S IFR would determine a time-orientation 8 under apparent temporality, but there would still be elements in S * IFR which would not match with the chosen time-orientation. However, when the case k > 0 is disregarded a priori (say, regarding it as aphysical), it would be natural to strengthen the hypothesis of apparent temporality into temporality, namely: all the transition matrices between pairs of elements of S * IFR in (5) have a 00 > 0. This requirement not only would exclude the group O (k) (4, R) for k > 0 but it would also imply a restriction on the group for the other cases. This discussion makes natural the following definition and convention.
Definition 2.5. The linear models of spacetime with k ∈ (−∞, 0) ∪ {ω, 0} will be called temporal models. When only these models are considered, we will assume that apparent temporality also holds and, then, the following convention of temporality can be assumed with no loss of generality: (a) The temporal models are time-oriented. (b) All the elements in S IFR are assumed to lie in the chosen time-orientation.
(c) S IFR is assumed to be maximal for the property (b). Thus, depending on the value of k, the orthochronous subgroup of the Lorentz (or conjugate to Lorentz), Galilean or dual Galilean group will act freely and transitively on S IFR .
(d) When there is no possibility of confusion, S * IFR is regarded as equal to S IFR in (c).
For temporal models, given a transition matrix A which gives the coordinates (t̄, x̄ j ) for R̄ from the coordinates (t, x j ) of R, the velocity and speed of R measured by R̄ are, resp.,

v = (a 10 , a 20 , a 30 )/a 00 (= a v /a 00 ) and its norm |v| in the spatial coordinates of R̄.

7 Anyway, it would represent the model of space and time which goes back to Aristotle. Recall that in that model, one would assume not only the existence of the absolute space and time but also that, for any P ∈ Aff, there exists a physical observer at P at absolute rest. This would determine the affine line P + R · e 0 , which would be regarded as a "space point at any time". 8 That is, a choice of one of the two timelike cones when k < 0 and one of the two choices of absolute time or absolute rest observer when k = ω, 0, resp., in the notation of Exercise 1 (see also [9, §5 (2)]).
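As a hedged numerical sketch of this definition (assuming, as in the reconstruction above, that the velocity is read off the first column of A as v j = a j0 /a 00 ): for a boost of speed w, the measured speed is exactly w and stays below c:

```python
import numpy as np

c, w = 1.0, 0.6                     # units with c = 1; boost speed w
g = 1.0 / np.sqrt(1.0 - (w / c) ** 2)
A = np.eye(4)
A[0, 0] = A[1, 1] = g
A[0, 1] = A[1, 0] = -g * w

vel = A[1:, 0] / A[0, 0]            # velocity of R measured by R-bar
print(vel, np.linalg.norm(vel))     # [-0.6, 0, 0], speed 0.6 < c
```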
Remark 2.7. In principle, it is appealing to call c the speed of light. Notice, however, that there is no mention of Electromagnetism nor of any other interaction in our approach. Nevertheless, an essential property of electromagnetism can justify that name. Namely, light is described by a wave which propagates in vacuum. An obvious natural hypothesis for IFR's is that the vacuum is "equal" for all of them and, so, any physical scalar quantity measured with respect to the vacuum must yield the same number for all of them. In particular, this would mean that all IFR's must measure the same speed of propagation of the light with respect to the vacuum. As the supremum c is the unique speed equal for all of them, the following definition is justified.

Definition 2.8. In a temporal model with k = −c 2 (c > 0), the constant c (the supremum of the speeds between IFR's) will be called the speed of light.

Anyway, the following digression about the physical content of this definition may be worthwhile. If one considered another interaction which also propagated in vacuum (say, gravitation), then the arguments in Remark 2.7 would imply that its speed of propagation c ′ with respect to vacuum would be the same c as for light. As emphasized by some authors, see [31], there is no logical contradiction in assuming that c ≠ c ′ and, thus, this question becomes an experimental issue 9 . In the affirmative, these different interactions might allow one to construct different types of clocks and rods in order to measure the temporal and spatial coordinates. So, the name IFR should include the interactions which allow Postulates 1 and 2 to hold.
3. First non-linearization
General Relativity can be regarded as a first non-linear generalization of Special Relativity. Such nonlinearity comes from the fact that Postulate 1, namely, the global affine character of spacetime, is dropped and the set of all the events is modeled by a manifold. Nevertheless (as is apparent from [9]), Postulate 2 would still make sense if the symmetries stated there are regarded just as infinitesimal ones, at the tangent space of each event.
This idea is well-established in the Lorentz case and it may seem very speculative in the other linear models of spacetimes. However, this will be developed briefly along this section with a double aim: on the one hand, the role of observers will be emphasized and, on the other, the framework of further issues relevant to the Lorentz-Finsler case will be settled. Only in §4 will we focus on the Lorentz case and go beyond, in order to reach the Lorentz-Finsler generalization.
3.1. General case and signature change. Assume now that the spacetime is described by a (smooth, connected) manifold M and that our postulates are regarded as infinitesimal requirements of symmetry at the tangent space T p M of each p ∈ M , that is, around each event p ∈ M , one can find a set of coordinate charts such that the relations (3) occur only at p, namely, considering normal coordinates.
Then, we will have a set S p of linear bases at each T p M which will play the role of (linear) IFR's at p. For simplicity, we will assume in what follows: (i) S p determines univocally some k(p) ∈ S 1 (i.e., the degenerate case of non k-congruent solutions of Exercise 1 is skipped), (ii) S p is maximal (i.e., S p = S * p , consistently with the discussion above Theorem 2.4), and (iii) consistently with Definition 2.5, the convention of temporality will be assumed whenever k(p) ∉ (0, ∞) (in particular, the notion of future-directed timelike vectors makes sense then).
Moreover, as an extra hypothesis (or third postulate, as in [9]) we assume: (P3) S p varies smoothly in the bundle LM of linear frames 10 of M . Formally, this means that S := ∪ p∈M S p is a smooth bundle embedded in LM (in the sense of a submanifold of LM with the induced topology such that the projection on M is a submersion) so that the function k : M → S 1 becomes smooth.
In general, one obtains then a signature-changing metric g which is Lorentzian (resp. Riemannian) in the set −∞ < k < 0 (resp. 0 < k < ∞). Following the terminology in [9,10], in the closed subset determined by k = ω, one has a Leibnizian structure, that is, a non-vanishing 1-form Ω (absolute time form) on M and a Riemannian metric h in the subbundle ker(Ω) of T M , (ker(Ω), h) then being the absolute space 11 . Analogously, the region k = 0 is endowed with an anti-Leibnizian structure, consisting of a non-vanishing vector field W (absolute rest field) on M and a Riemannian metric h * on the subbundle ker(W ) of the cotangent bundle T M * .
Let g be the semi-Riemannian (Lorentzian or Riemannian) metric in the region k ≠ 0, ω and g * the (physically equivalent) metric induced in the cotangent space. It is worth emphasizing that, in the region k = 0, g can be extended as a degenerate metric and g * cannot; however, g * matches smoothly with h * on ker(W ). Analogously, in the region k = ω, g * can be extended as a degenerate metric, while g matches smoothly with h on ker(Ω).
Summing up, this first non-linear generalization of the IFR setting yields as a general model of spacetime a geometry governed by the smooth function k. Whenever k ≠ 0, ω, a semi-Riemannian metric g and its equivalent dual metric g * are obtained; in the regions k = 0 or k = ω, either the metric g or g * is extended as a degenerate metric and additional geometric structures appear 12 . The transition among these elements is smooth, as so is S. The transition from Lorentzian to Riemannian through a region with k = ω can be easily understood by looking at O (see Fig. 1). We will not be interested in the transition through a region with k = 0. However, this could be described in a completely analogous way by defining a dual space of observers (constructed by picking at each point p the first element of the elements in the bases which are dual to those in S p ).
3.3. Pointwise variation of the speed of light. In the region −∞ < k < 0, the function c(p) = √|k(p)| might be understood as a variation of the speed of light with the point at M . Such a possibility has been speculated about since the beginning of General Relativity, and was put forward in the 90's in relation to cosmological inflation and the horizon problem (see for example [3,6,69,72], as well as some criticism in [22,84]). In order to avoid the circularity of using the light to define the units to measure its speed, Barrow and Magueijo [7] argue that only the variation of adimensional constants would have a true physical meaning; so, the variation of c should be regarded as a variation of the (adimensional) fine structure constant α = e 2 /(4πε 0 ℏc).
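Just as a numerical aside, the value of this adimensional constant can be recovered from tabulated physical constants:

```python
from scipy.constants import e, hbar, c, epsilon_0, pi

# Fine structure constant alpha = e^2 / (4*pi*eps0*hbar*c):
alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)
print(alpha, 1 / alpha)   # ~0.0072974, ~137.036
```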
Without delving into these questions, some comments about varying c in our framework are in order.

12 Recall that models of signature-changing metrics have been studied at least since the influential "no boundary" proposal by Hartle & Hawking [35]; see for example [19,87]. Moreover, the existence of an "absolute time" in the transition region has also been pointed out by several authors [52, §2] (see also [85]).
[Figure 1. The space of observers changes from a hyperbola to a line and to a circumference.]
Recall first that, in the affine case obtained by assuming Postulate 1, to assume also Postulate 2 would imply that all the IFR's would be using the same units of measurement and, then, the specific value of c would depend on the chosen units. Indeed, the natural interpretation of the group O (k) (4, R) for k = −c 2 ∈ (−∞, 0) is just the Lorentz group in some appropriate coordinates. Thus, if one regarded the affine space Aff as a manifold and took different units at each point, then this could not be interpreted as a variable speed of light.
Measuring a varying speed of light would rely on the possibility of comparing the units of measurement at different points. In the affine case, such a comparison would be possible if the interactions were invariant by translations (an unlikely possibility). In general, one would need measurements involving magnitudes which are dimensionally independent (in particular, this would be achieved by measuring adimensional constants, as commented above). In principle, this might be achieved by measuring essentially different interactions, as in the case of light and gravity propagation (see below Definition 2.8) 13 . Anyway, as we will see, the Lorentz-Finsler viewpoint will open other possibilities by using infinitesimal anisotropy.
3.4. Relativistic vs Leibnizian structures. A priori, the cases −∞ < k < 0 and k = ω (with constant k, and then c) are the physically interesting ones, either as a model of spacetime or as an approximation to this model. Next, they will be briefly compared.
In the first case, we will assume k = −1 on all of M , after our discussion in the previous subsection. So, one has a time-oriented Lorentzian metric g and O is a fibered space on M with fiber the hyperbolic space. Moreover, the Levi-Civita connection ∇ g is canonically associated with g, and any other affine connection ∇ parallelizing g (i.e., satisfying ∇ µ g νρ = 0) must be non-symmetric. This selects ∇ g and provides a sense of free fall and lightlike geodesics (compare with EPS later).
In the case k = ω, the Leibnizian structure consisting of the absolute time form Ω and the absolute space (ker(Ω), h) on M described in subsection 3.1 is obtained. These structures were studied systematically in [55] and [10]. In this setting, one considers Galilean connections, that is, affine connections ∇ which parallelize both Ω and h. It is worth pointing out that the set of all the Galilean connections has the same degrees of freedom as the set of all affine connections (symmetric or not) parallelizing a Lorentzian metric g. However, a symmetric Galilean connection will exist if and only if Ω is closed (dΩ = 0), that is, locally Ω = dt for some function t. Nevertheless, in this case, there is no univocally determined symmetric connection. Moreover, this happens even if Ω is exact, that is, Ω = dt for some function t defined globally on M , which will be called the absolute time (t is unique up to an additive constant). Indeed, an explicit Koszul-type formula reconstructs all the symmetric Galilean connections in terms of two data 14 [10, Cor. 28], namely: the gravitational field (a vector field in the absolute space, that is, a section of the bundle ker(Ω) → M ) and the vorticity (a skew-symmetric 2-form on the vector bundle ker(Ω)).
In conclusion, relativistic spacetimes are preferred to Leibnizian ones from the viewpoint of foundations, because of two celebrated properties: (a) they permit modeling a finite speed of propagation in vacuum (recall that observers appear now at each event as infinitesimal approximations to IFR's and, so, the arguments in Remark 2.7 apply), and (b) they select a unique affine connection in the set of all the connections parallelizing the geometric structure, while Leibnizian ones require the gravitational and vorticity fields as an extra input.
In the next section, we will focus only on relativistic spacetimes and the Finslerian extensions. Nevertheless, some previous elements serve as a background for the Lorentz and Lorentz-Finsler cases and they can be compared a posteriori with them (see Table 1). We point out a pair of them so that the interested reader might come back here later: (1) Leibnizian structure (Ω, h) vs cone triple (Ω, T, F ) (which is useful to define and to handle any cone structure C, Lorentz or Lorentz-Finsler, see Definition 4.6, Remark 4.7(2)). Notice that when F comes from a Riemannian metric h, then the Leibnizian structure can be regarded as a sort of limit when λ → ∞ of the triples (λΩ, T /λ, F ), which "open" the cone C.
(2) Chronometric vs EPS approach to spacetime (§5). The Leibnizian structure (Ω, h) (eventually, with Ω = dt) gives a chronometric approach to spacetime, in a similar way as the Lorentz metric does in Relativity. However, the former requires an additional input (an affine connection) in order to define free fall. So, the EPS approach (at least the axioms which do not consider light propagation) might also be interesting in the Leibnizian case. In contrast, the Lorentz-Finsler metric L will provide timelike and lightlike geodesics in a very similar way as the Lorentz metric g, in spite of the differences between the Levi-Civita connection ∇ g and the anisotropic connection (see footnote 21) determined by L.

14 Such a formula can be extended to include non-symmetric connections by adding as a third datum a suitable component of the torsion, see [10, Th. 27].

Note 3.3. Newton-Leibniz controversy. To end this section, it is worth pointing out that the notion of Leibnizian structure provides a precise mathematical description of a historical controversy between Leibniz and Newton. Roughly speaking, Leibniz criticized Newton's arguments about IFR's by pointing out that the Euclidean space perceived by an observer is equal even after a rotation of the observer's coordinates. So, he claimed that one could not detect whether these axes are being rotated at different times. Newton replied that spinning water in a bucket would detect whether the observer is rotating or not. From the mathematical viewpoint, Newton was using the structure of a Galilei-Newton spacetime, as described in §2.3 (that is, the linear quadratic classical space + time approximation in Table 1). So, the overall affine structure of the (four-dimensional) spacetime yields a natural affine connection, which can be used to detect rotation. Leibniz, however, is considering physical spacetime only as a manifold endowed with a Leibnizian structure (that is, he drops the spacetime affine structure and considers only the pointwise quadratic first nonlinear space + time in Table 1). So, with these elements, no affine connection is determined, and rotation cannot be measured. Summing up, Leibniz was right in pointing out that, only with the Leibnizian structure on M at hand, no Galilean connection is selected 15 . However, Newton did select such a connection by guessing the further affine structure of M .
4. Second non-linearization
4.1. Background: norms, cones and Lorentz-Finsler metrics. In order to show rigorously the emergence of the notion of Finsler spacetime, some purely geometric elements are stressed first. Even though some of them are elementary, they will be necessary to make precise discussions. So, the experienced reader can skip some parts and come back when necessary.
The first ones come from classical norms on a (finite-dimensional, real) n-vector space V and Finsler Geometry; they are carefully explained in [48].

[Table 1. Classical models of non-quantum space and time (linear models and their non-linearizations).]

(1) Positive homogeneity. This requirement only for λ > 0 enhances the applications of Finsler Geometry 16 , and it will be enough for our purposes. Positive homogeneity implies that F 0 is univocally determined by its indicatrix (unit sphere) Σ 0 := F 0 −1 (1). In particular, the full homogeneity of F 0 becomes equivalent to the symmetry of Σ 0 with respect to the origin. (2) Smoothness. The standard definition of norm implies that norms are only continuous. We assume smoothness (say, C ∞ , pointing out the cases when lower regularity becomes relevant) away from 0. Using (ii), this is clearly equivalent to the smoothness of Σ 0 . Recall also that, clearly, any norm coming from a (Euclidean or Lorentzian) scalar product is determined by its value on a cone domain. (1) 2-homogeneity. Taking F 2 instead of F , Finsler metrics can be defined alternatively as positive 2-homogeneous functions (this will be convenient for their Lorentzian extensions). What is more, the C 2 -smoothability of F 2 at 0 would then imply that it comes from a Riemannian metric (recall Remark 4.2 (5)).
a Finsler metric can be defined alternatively as a smooth hypersurface Σ embedded in T M satisfying appropriate conditions, namely: (a) Σ intersects transversely 17 each T p M , and (b) this intersection Σ p := Σ ∩ T p M is a strongly convex compact connected embedded hypersurface whose inner domain B p (such that Σ p = ∂B p , where ∂ denotes the boundary in V ) 18 contains the zero vector 0 p . (3) Fundamental tensor on a vector bundle. Each F p defines a fundamental tensor field on T p M \ {0} and, so, a 2-covariant tensor on each fiber of the (slit) tangent bundle π : T M \ 0 → M . We will use the letter g to denote such a tensor field, so that, for each v ∈ T M \ 0, g v will be a tensor on T p M , where p = π(v). Clearly, the definition of Finsler metric and fundamental tensor can be extended to any vector bundle, not necessarily the tangent one.
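The fundamental tensor of item (3) can be made explicit with a short symbolic sketch. It assumes the usual convention g v = (1/2) Hess(F 2 ) and uses, as a hypothetical illustrative example, the Randers-type Minkowski norm F = |v| + v x /2 on R 2 ; unlike the Euclidean case, g v genuinely depends on the direction v, while Euler's theorem for 2-homogeneous functions still gives g v (v, v) = F (v) 2 :

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Randers-type Minkowski norm on R^2 (a genuine Minkowski norm since |b| < 1):
b = sp.Rational(1, 2)
F = sp.sqrt(x**2 + y**2) + b * x

# Fundamental tensor g_v = (1/2) * Hessian of F^2 at v = (x, y):
g = sp.Rational(1, 2) * sp.hessian(F**2, (x, y))

# g_v depends on the direction (compare two unit directions):
print(sp.simplify(g.subs({x: 1, y: 0})))
print(sp.simplify(g.subs({x: 0, y: 1})))

# 2-homogeneity (Euler's theorem): g_v(v, v) = F(v)^2, e.g. at v = (3, 4):
v = sp.Matrix([x, y])
check = ((v.T * g * v)[0, 0] - F**2).subs({x: 3, y: 4})
print(sp.simplify(check))   # 0
```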
The rest of the elements involve the Lorentz-Finsler case, and we follow [46]. We start with the definition of cone; for our purposes, the next one is enough, and a more intrinsic definition can be seen in [50]. A cone structure allows one to reproduce the elements of Causality of spacetimes such as the chronological ≪, strict causal <, causal ≤ and horismotic → relations (p → q when p < q but not p ≪ q) and, thus, the chronological/causal futures and pasts of a point, I + (p), I − (p) / J + (p), J − (p). In particular, cone geodesics are defined as locally horismotic curves, and they generalize the future-directed lightlike pregeodesics associated with the conformal structure of any Lorentz metric.
In the following, we will say that a function is smooth in a manifold with boundary (contained in a regular manifold M ) if it can be (locally) extended to a smooth function on an open subset of M . (2) 2-homogeneity for L 0 is preferred to 1-homogeneity because of the general equality L 0 (v) = g v (v, v). Notice also that the Lorentzian signature is changed with respect to the previous sections and, consistently, if L 0 is smoothly extended around any v ∈ C 0 , then L 0 must become negative away from Ā 0 . (3) L 0 is determined by its indicatrix Σ 0 = L 0 −1 (1), which is now strongly concave and asymptotic to C 0 . Indeed, a Lorentz-Finsler metric could be defined alternatively as a strongly concave hypersurface Σ 0 in A 0 which is asymptotic to some cone structure C 0 , under the mild technical condition that the map extend smoothly to C 0 with non-degenerate 19 g. (4) All the properties related to the triangle inequality in the positive definite case (which were associated with the convexity of the indicatrix and held for conic Minkowski norms, Remark 4.2 (3)) are automatically translated now as reverse triangle inequalities in the Lorentz-Finsler case (associated with the concaveness of Σ 0 ).

19 These conditions would be satisfied by hypersurfaces suitably C 2 -close to the space of observers O of any relativistic spacetime (notice that some issues appear involving the extendability of L to the cone and whether the cone is prescribed or not), and they can be constructed for any cone (recall Rem. 4.11(4) below).

[...] to an open conic subset A * 0 which contains Ā 0 (recall that 0 ∉ Ā 0 ). This extension is far from unique, but the fundamental tensor in the boundary is well-determined.

Remark 4.11. The following results on Finsler spacetimes will be relevant: (1) Any Lorentz-Finsler metric can be extended to T M \ 0 as a smooth 2-homogeneous function with fundamental tensor g of Lorentzian signature, see [67]. However, such an extension is highly non-unique and, as we will see, it is not justified by direct measures of observers.
G(v) = Ω(v) 2 − F (π 2 (v)) 2 , where π 2 : T M = Span(T ) ⊕ ker(Ω) → ker(Ω) is the natural projection. G satisfies all the required properties of L except the differentiability on Span(T ), the latter because of the lack of differentiability of F 2 at 0 when it is not Riemannian. Indeed, the indicatrix G −1 (1) ⊂ A is not smooth precisely on T , that is, only at the point T p on each p. However, standard techniques of smoothability for convex functions allow one to smoothen G around T , obtaining the required L [50, §5.2]. (5) The lack of differentiability of G above is analogous to the well-known lack of differentiability of any product of (non-Riemannian) Finsler manifolds. Indeed, if (M 1 , F 1 ) is a Finsler manifold, then dt 2 ⊕ (±F 1 2 ) are not smooth as Finsler or Lorentz-Finsler metrics on R × M 1 along the direction ∂ t . This problem prevents the extension to the Lorentz-Finsler case of the trivial procedure to construct a relativistic product spacetime starting at a Riemannian manifold. (6) Given a Lorentz-Finsler metric, there exists a univocally determined A-anisotropic connection which is torsion-free and which parallelizes the metric. Moreover, when we consider a properly Lorentz-Finsler metric, this A-anisotropic connection can be extended to an open subset A * which contains Ā \ 0. As the extension away from Ā is highly non-unique, we will speak about Ā-anisotropic connections. When A = T M \ 0, we will just say anisotropic connection 21 .

20 Recall that, for any function L 0 on Ā 0 ∪ {0} ⊂ V (with A 0 a cone domain), the elementary definition of existence of a differential map at 0 makes sense because 0 is an accumulation point of the domain of L 0 , and its uniqueness is guaranteed because A 0 contains n independent directions converging to 0.
Due to this last item, the definitions of some classes of Finsler spacetimes, such as the static ones, have included the possibility of having some non-smooth directions [16,17,58]. However, the smoothing procedure mentioned in part (4) is also applicable to these cases. This shows that, from the foundations viewpoint, the motivation for non-smooth metrics is not stronger for the Lorentz-Finsler case than for classical relativistic spacetimes [50, §4.2, §4.4].

Note 4.12. Some examples in the literature fail to be smooth on the cone but still satisfy that: (i) its cone C is smooth (so, the cone geodesics are well defined), and (ii) the A-anisotropic connection can be smoothly extended to C (so, the Finslerian curvature tensors are well-defined on C).
Such examples could also be included in our definition of Lorentz-Finsler metrics and spacetimes, as all the relevant geometric properties remain. However, we will consider for simplicity that L is smooth at C and, when this property does not hold, we will refer to them as improper and we will discuss whether (i) and (ii) hold then. Accordingly (and consistently with [50, Def. 3.1]), an improper Lorentz-Minkowski norm L 0 satisfies all the properties in Def. 4.8 but the differentiability at L 0 −1 (0). Remarkably, a large class of spacetimes satisfying both conditions (i) and (ii) can be found following [73]. Namely, they hold for any two-homogeneous function L defined on the set of causal vectors Ā determined by a cone structure C such that: (a) L is zero on C and determines a Lorentz-Finsler metric in the interior A of Ā, and (b) there is a power of L which is smooth on C with non-degenerate Hessian (notice that, in [73], L is assumed to be defined on the whole T M ).
Anyway, there are some examples of Finsler spacetimes in the literature that do not even satisfy our weaker definition of improper Lorentz-Finsler spacetime, such as Randers spacetimes or those introduced by Kostelecky [53,54], which are the effective model of some particles with no GR background (see the discussion in [50, Appendix A, B]). 21 Essentially, this is a connection where, formally, the Christoffel symbols of a chart (U, ϕ) depend also on the direction and, so, they are functions on T U ∩ A ⊂ T M \ 0 which are positively homogeneous of degree zero. The name and a thorough study of A-anisotropic connections were given in [44,45]; see also [63,64] for a study of connections on fiber bundles from a more general viewpoint.
4.2. Physical intuitions for Finsler spacetimes. Next, our aim is to justify physically our definition of Finsler spacetime (Def. 4.10), supported by some mathematical properties pointed out above. The first consideration is that Postulate 2 should be regarded now as an approximate symmetry at each point, in a similar way as the affine structure of Postulate 1 has been regarded as an approximate symmetry to the structure of a relativistic spacetime 22 . This means that, now, one cannot find a set of coordinate charts such that the relations (3) occur at each p; however, one would expect that we will not be far from this situation (at least in regions of spacetime free of extremely exotic or violent situations). Consistently, we will not have the sets S p of linear bases at each T p M playing the role of (linear) IFR's at p. However, one would expect that the set of observers O introduced in Def. 3.1 will still make sense and will be "close" to the space of observers for a relativistic spacetime. As the latter is a hyperboloid (asymptotic to a quadratic cone) at each point p, now O p should be a strongly concave hypersurface asymptotic to some cone structure, defining a Lorentz-Minkowski norm at p (see Rem. 4.9 (3)) and, moreover, O should be identified as the indicatrix Σ of a Lorentz-Finsler metric L.
Remark 4.13. The previous discussion leads us to a Lorentz-Finsler metric L with indicatrix Σ equal to O, which falls exactly under our Def. 4.10 (including also the improper case explained in Note 4.12). The way to arrive at this definition from the viewpoint of symmetries can be summarized as follows.
(1) Following [34], consider the connected components of the identity ISO 1 (4), SO 1 (4), ISO(3), SO(3) of the Poincaré, Lorentz, Euclidean and orthogonal groups, resp. In Special Relativity, the homogeneous spaces obtained as the quotients ISO 1 (4)/SO 1 (4), ISO 1 (4)/ISO(3), ISO 1 (4)/SO(3) are, respectively, the spacetime, the space of all the (rest) spaces (i.e., the space of all the spacelike hyperplanes, the standard rest space being ISO(3)/SO(3)) and the space of observers O (the space of velocities being SO 1 (4)/SO(3)). Here, O is metrically identifiable with R 4 × H 3 + . (2) In General Relativity, O is identified with the set Σ g of all the future-directed unit vectors. Σ g is a subbundle of T M whose fibers are affine hyperboloids at each tangent space. Such hyperboloids characterize g univocally, so that the information of g is codified in O.
(3) For the space of observers O in the Lorentz-Finsler setting, Σ g is replaced with a hypersurface Σ satisfying formal properties analogous to Σ g (but dropping its pointwise symmetries) so that it characterizes a Lorentz-Finsler metric.
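A minimal numerical sketch of item (2) (with signature (+, −, −, −) and c = 1, as assumed here): the fiber of Σ g is the set of future-directed g-unit vectors, and any Lorentz boost maps it to itself — the pointwise symmetry that the Finslerian Σ of item (3) drops:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # signature (+,-,-,-), c = 1

def observer(u):
    """Future-directed unit timelike vector with spatial velocity u, |u| < 1."""
    u = np.asarray(u, float)
    g = 1.0 / np.sqrt(1.0 - u @ u)
    return np.concatenate(([g], g * u))

v = observer([0.2, 0.1, -0.3])
print(v @ eta @ v)                        # 1.0: v lies on the fiber of Sigma^g

w = 0.5
g = 1.0 / np.sqrt(1.0 - w**2)
B = np.eye(4)
B[0, 0] = B[1, 1] = g
B[0, 1] = B[1, 0] = -g * w
bv = B @ v
print(bv @ eta @ bv, bv[0] > 0)           # still unit and future-directed
```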
Next, let us discuss more carefully the physical grounds of Def. 4.3: (1) The fact that L is defined only on a cone domain A and it is extended continuously to C comes from the nature of the space of observers. Recall that, then, one has timelike geodesics (Rem. 4.11 (2)) and, thus, freely falling observers. At least from a trivial mathematical viewpoint, this is enough to determine L and, then, the fundamental tensor g on the cone domain A.

22 Even though we focus on the relativistic case (disregarding the Leibnizian case and the other possibilities), one could also consider a Leibniz-Finsler structure (Ω, h) on a manifold M , where h would now be a Finsler metric on Ker(Ω) instead of a Riemannian one, according to Table 1.
Notice that, given an observer v ∈ Σ p , the tensor g v is then also obtained on the directions of T v Σ p . In principle, g v (which can be obtained just from Σ) could be measured, as it comprises properties of neighboring observers.
(2) The smooth extensibility of both L and the fundamental tensor g (as a non-degenerate one) to the cone structure C appears as a natural approximation (in principle, one would expect to remain close to the situation in a relativistic spacetime) which mathematically ensures that C is truly a cone (with S 0 in Def. 4.5 satisfying strong convexity). Moreover, then L also determines lightlike geodesics which, up to reparametrization, are inherent to the cone structure. The improper case of Finsler spacetimes satisfying the properties (i) and (ii) in Note 4.12 would also satisfy all these properties about geodesics and cones. Then, as a consequence, the behaviour of lightlike geodesics becomes completely analogous to the classical relativistic case. Indeed, Lorentz-Finsler metrics with the same cone structure are also related by an "anisotropic conformal factor µ" (Rem. 4.11(3)) and the cone structure C also allows one to mimic the relativistic behaviour of Causality (Rem. 4.7(3)).
(3) The physical considerations in the two previous items are also assumed in standard Relativity. Namely, observers are always expected to measure only massive or massless particles, that is, elements with velocities in a causal cone. In General Relativity, this is apparent from the EPS formulation, where radar coordinates are systematically used with this aim (see the next section). Certainly, the metric tensor g is assumed to be defined on all the directions in the relativistic case, but the underlying reason is that g is fully determined by its value along the causal vectors (Rem. 4.2 (5)). This is not by any means true in the Lorentz-Finsler case, even if L can be extended to the whole T M (recall Rem. 4.11 (1)). (4) When a spacelike separation in a direction l is going to be measured by an observer v, it seems natural to consider g v (l, l); so, it would be irrelevant whether L is not defined outside the cone. Indeed, from a purely geometric viewpoint, T v Σ p would be naturally regarded as the rest space of the observer v at p, and g v would be the unique metric available there, even though the physical process to measure it might not be obvious. It is worth pointing out here Ishikawa's claim in [43] that g v (l, l) can be measured assuming that the physical light rays are those of g v . Indeed, this author criticizes Beem's definition of light rays, which were constructed by using the lightlike vectors on the cone C. Anyway, in our opinion, Ishikawa's claim needs further physical support. (5) It is worth emphasizing that no issue on smoothability occurs with Σ, which can be assumed smooth (as in Rem. 4.2 (2)) in most interesting cases. Indeed: (a) The Lorentz-Finsler metric L cannot be C 2 -extended to 0, in agreement with the behavior of norms in both the positive definite case and the Lorentz-Finsler one (Rem. 4.2 (5)). However, no physical or mathematical reason seems to require the smoothability of L at 0 (compare with the EPS approach in §5.2.1 below).
(b) Product metrics −dt 2 + F 2 or, with more generality, the rough Lorentz-Finsler version of static spacetimes −Λ(x)dt 2 + F 2 (x, y), with natural coordinates (x, y) at T M , are never smooth at ∂ t whenever F is Finsler but not Riemannian. Consequently, some authors have included the possible existence of non-smooth directions as a fundamental ingredient of Lorentz-Finsler metrics (see for example [16,17,58]). Nevertheless, as explained in Rem. 4.11, parts (4) and (5), general smoothing procedures can be applied. What is more, a natural definition of (smooth) static spacetimes as well as an explicit procedure to construct locally all of them are available at [50, §4.2].
(c) Other issues of non-smoothness appear when modelling some specific physical situations (Very Special Relativity, birefringence) and will be considered in §6.1.
5. Comparison with Ehlers-Pirani-Schild approach
5.1. Summary of the approach. The EPS approach [21] constructs step by step each geometric structure of physical spacetime (until reaching the metric) by means of physically motivated axioms: (1) Spacetime becomes a differential manifold M endowed with a cone structure C. Essentially, this is obtained by means of axioms on light propagation which involve messages and echoes between particles. Indeed, these axioms allow one to find radar coordinates with respect to (freely falling, massive) particles, the latter represented by a class of unparametrized curves, which provide the structure of differentiable manifold, see EPS axioms D 1 -D 4 . Then, the cone structure C is obtained by using two axioms, L 1 , L 2 , on the local character of light propagation around each event e. Indeed, L 1 states that, given any particle P with some parameter t which passes through e, any event p (p ∉ P) can be connected with the particle by exactly two light rays 23 , while L 2 distinguishes two connected components for light rays. Moreover, L 1 also states that, if these two rays cross the curve at the events e 1 , e 2 , then g(p) := −t(e 1 )t(e 2 ) is required to be smooth in a small neighborhood of e. EPS claims that, then, C will come from the conformal structure of some Lorentz metric (a particular case of our Def. 4.6) and, so, we can speak about C-timelike directions.
(2) Spacetime is endowed with a projective structure P. This is achieved by means of two axioms, P 1 , P 2 , which model the free fall of particles.

23 Along the events ẽ ∈ P, all the light rays from ẽ would trivially cross P at ẽ; so, the function g below would be trivially extended as g(ẽ) = −t(ẽ) 2 . However, the points on P would be excluded in order to define the differentiable structure of the manifold by using radar coordinates (recall the example in footnote 24 below).
The first one states only the existence of a unique particle, represented by means of an (unparametrized) curve, for each event e and C-timelike direction at e. The second axiom states that, around each event e, one can find coordinates x̄ i such that any particle through e admits a parametrization x̄(ū) satisfying:

d 2 x̄ i /dū 2 = 0 at e.   (10)

This equality is regarded as an infinitesimal law of inertia (consistently with Trautman [82]). By using (10), EPS argues that a projective structure, which is claimed to be compatible with some affine connection A, must appear. As a consequence, not only the original particles would be recovered as pregeodesics of A but one would also obtain pregeodesics in any direction, timelike or not. (3) Spacetime is a Weyl space (M, C, A), where A is an affine connection compatible with the cone structure C, in the sense that the lightlike C-pregeodesics are also A-pregeodesics. This is carried out by means of their axiom C, which matches particles and light rays. Specifically, this axiom assumes that, around each event e, any point in the C-chronological future of e lies on a particle through e. This will imply that the lightlike C-pregeodesics of the conformal structure (namely, the C-cone geodesics, see Remark 4.11 2) are also pregeodesics for the projective structure P in the step (2). Then, EPS claims that such a compatibility selects a unique affine connection A compatible with the projective structure. (4) Spacetime is endowed with a (time-oriented) Lorentzian metric g, up to an overall (constant) scalar factor. This is obtained by means of a Riemannian axiom, which takes into account that A has its own parallel transport and its curvature tensor; the axiom imposes the compatibility of (one of) these two elements with g. Indeed, they state that the Riemannian compatibility of (M, C, A) is equivalent to any of the following conditions: (a) the vectors obtained by A-parallel transport of a single one v at p ∈ M along two curves with the same endpoint q have the same norm at q (computed with any of the homothetic scalar products compatible with C q ), or (b) using Jacobi fields to construct arbitrarily close particles, the proper times of two such particles are linearly related at first order, that is, the regular ticking of a clock for the first particle implies the regular ticking for the second one. About these axioms and proofs, EPS admits: "a fully rigorous formalization has not yet been achieved". Next, we will focus just on the relation of the EPS approach with Lorentz-Finsler metrics. For progress on the EPS approach, see for example [77].
5.2. Keys of compatibility with Finslerian spacetimes. The fact that a Finslerian spacetime can fulfill the EPS axioms was already pointed out by Tavakol & van den Bergh [81], who considered the case of Berwald spaces. Now, our aim is to revisit precisely the compatibility of the four EPS steps with Finslerian elements, as well as [81].

5.2.1. EPS step (1). Recently, Lämmerzahl and Perlick [57] have argued against the role of smoothness of the function g(p) at e in the step (1). This differentiability becomes essential, because the equalities g(e) = 0, g ,a (e) = 0 allow EPS to find a metric tensor g ,ab (e) compatible with C.
Indeed, there are subtle differences at this point in comparison with the introduction of radar coordinates, which are used to settle the smooth (C 3 ) manifold structure of the spacetime. Certainly, EPS were aware of the existence of non-trivial subtleties, as one can read at the beginning of their subsection Differential Topology: "The reason that we do not take this structure [smooth manifold] for granted is that differentiability plays a crucial role in our introduction of null cones (...) and in the infinitesimal version of the law of free fall". The following three items must be taken into account in the EPS development: (i) The axioms D 1 -D 4 , which allow one to define radar coordinates, should apply to particles P, Q which do not intersect. Otherwise, spurious differential issues might appear even in the case of Lorentz-Minkowski spacetime 24 .
(ii) Axiom L 1 , however, considers the functions p → t(e 1 ), p → t(e 2 ) (which would be radar coordinates for some particle P through e) defined even on P. Moreover, this axiom ensures that the particular combination g(p) = −t(e 1 )t(e 2 ) is C 2 -differentiable on P too.
(iii) In the discussion above [21, Lemma 1], they explain that t(e 1 ) = t(e 2 ) = 0 occurs if and only if p = e = e 1 = e 2 (thus, p ∈ P) and they focus on this case. Then, EPS argues first that the differential g ,a (e) must be 0 by applying L 2 and, using C 2 differentiability, they show that the light directions must lie in the quadratic cone of the lightlike vectors of g ,ab (e).
Recall, however, that there is no physical justification of why g must be differentiable or C 2 . Notice that g is constructed from the functions p → t(e 1 ) and p → t(e 2 ), which are not smooth even in the Lorentz-Minkowski spacetime (see Example 5.1 below). This assumption on the product t(e 1 )t(e 2 ) yields a posteriori the quadratic character of the cone, forbidding more general cone structures.
From a purely mathematical viewpoint, the smoothness issue on the radar coordinates above would be similar to the differentiability of the radial coordinate r of a normed vector space at 0: r is never smooth at 0 and r 2 is smooth if and only if the norm comes from a Euclidean scalar product (Rem. 4.2(5)). So, such an a priori assumption would be completely unjustified from a mathematical viewpoint too; indeed: (a) There are norms with an analytic indicatrix (thus, analytic away from 0) which do not come from a scalar product. For example, on R 2 , take the norm whose indicatrix is the curve in polar coordinates ρ(θ) = 1 + ε sin θ for small ε > 0 (so that it is strongly convex; see the numerical sketch below). (b) Euclidean scalar products are very particular cases of analytic norms.
That is, the apparently mild EPS requirement of differentiability at 0 becomes even stronger than analyticity for a norm.
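The claim in example (a) can be checked numerically: the curvature of the polar curve ρ(θ) = 1 + ε sin θ (given by the standard formula κ = (ρ 2 + 2ρ ′2 − ρρ ′′ )/(ρ 2 + ρ ′2 ) 3/2 ) stays positive, so the indicatrix is strongly convex, while ρ(θ) ≠ ρ(θ + π) shows the norm is not symmetric and hence cannot come from a scalar product:

```python
import numpy as np

eps = 0.1
theta = np.linspace(0.0, 2 * np.pi, 2001)
rho  = 1 + eps * np.sin(theta)
rho1 = eps * np.cos(theta)       # rho'
rho2 = -eps * np.sin(theta)      # rho''

# Curvature of a curve given in polar form:
kappa = (rho**2 + 2 * rho1**2 - rho * rho2) / (rho**2 + rho1**2) ** 1.5
print(kappa.min() > 0)           # True: the indicatrix is strongly convex

# Asymmetry rho(theta) != rho(theta + pi): not from a scalar product
print(np.max(np.abs(rho - (1 + eps * np.sin(theta + np.pi)))) > 0)   # True
```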
Example 5.1. Let us see the role of smoothability for the EPS function g obtained by using a pair of radar coordinates with respect to a particle (according to EPS, one should take two pairs of radar coordinates by choosing two particles). We will work on M = R × R 3 . Let t : R × R 3 → R be the natural projection, consider any Minkowski norm F 0 on R 3 and take spherical-type coordinates (r, θ, ϕ) on R 3 (up to suitable points) with θ, ϕ the usual spherical angles and r ≡ F 0 ; then, extend the functions r, θ, ϕ to R × R 3 in a t-independent way. Let C be the natural (constant) cone structure given by t(p) = r(p) and regard the t-axis as a particle P. The corresponding radar coordinates are t ± r and, thus, the EPS g is g(p) = −t 2 (p) + r 2 (p). This function is smooth at 0 if and only if F 0 comes from a Euclidean scalar product 25 . Anyway, the cone structure is smooth, because it is determined by the cone triple (dt, ∂ t , F 0 ) and, so, it is compatible with a smooth Lorentz-Finsler metric L (indeed, a Lorentz-Minkowski norm), see Rem. 4.11 (4). As stressed in the item (5) below Rem. 4.13, the fact that −dt 2 + F 0 2 is not smooth at ∂ t neither contradicts the existence of a smooth L nor introduces any issue of smoothability.
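A short symbolic check of this example, under stated assumptions: take the (non-Euclidean) Minkowski norm F 0 = |v| + v x /2 on R 3 (the same hypothetical Randers-type choice as before) and r = F 0 . The spatial Hessian of r 2 is positively 0-homogeneous, so its values along two different rays give two different candidate limits at 0, showing that r 2 (and hence the EPS g) is not C 2 at the origin:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

b = sp.Rational(1, 2)
r2 = (sp.sqrt(x**2 + y**2 + z**2) + b * x) ** 2   # r^2 = F_0^2, 2-homogeneous

H = sp.hessian(r2, (x, y, z))                     # 0-homogeneous in (x, y, z)

# Different values along different rays => no limit of the Hessian at 0:
print(sp.simplify(H.subs({x: 1, y: 0, z: 0})))    # value along the x-axis
print(sp.simplify(H.subs({x: 0, y: 1, z: 0})))    # a different value along y
```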
5.2.2. EPS step (2). The way EPS deduces the existence of the projective structure P from the infinitesimal law of inertia (10) consists in rewriting this last formula in arbitrary coordinates to obtain [21, formula (7)]

ẍ a + Π a bc ẋ b ẋ c = λ ẋ a ,   (11)

where λ depends on the parameterization x a (u) of the curve and Π a bc depend on x a . These functions are called the projective coefficients, as they would determine a projective structure P compatible with some affine connection.
However, if one allowed the functions Π a bc to depend on the direction of the velocities ẋ j , then Π a bc (x i , ẋ j ) could represent the formal Christoffel symbols for a Lorentz-Finsler metric L (indeed, for its A-anisotropic connection, see Remark 4.11, item 6). Thus, the solutions of (11) would be pregeodesics for L which satisfy the law of inertia (10), up to the following issue of C 2 -differentiability of the chart coordinates at the origin.
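The C 2 issue just mentioned, detailed next, boils down to a simple fact which can be illustrated with a toy computation: a positively 0-homogeneous function (like the anisotropic Christoffel symbols) is constant along rays, so it has no limit at the origin unless it is actually constant. The symbol Π below is a hypothetical toy example, not one coming from a specific metric:

```python
import numpy as np

def Pi(v):
    """Toy positively 0-homogeneous 'Christoffel symbol': Pi(s*v) = Pi(v), s > 0."""
    v = np.asarray(v, float)
    return v[0] ** 2 / (v @ v)

for s in (1.0, 1e-3, 1e-9):
    print(Pi([s, 0.0]), Pi([s, s]))   # 1.0 vs 0.5 at every scale s
```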
The existence of normal coordinates in C-timelike directions (which is ensured for any A-anisotropic connection 26 ) would be the natural mathematical translation for the law of inertia. However, the Christoffel symbols of a Lorentz-Finsler metric might not be even continuous at the origin by the trivial reason that these symbols may depend on the direction but they cannot vary along each direction (they are homogeneous of degree 0). Thus, its exponential map is not guaranteed to be C 2 at the origin unless the anisotropic connection is affine (i.e., it does not depend on the direction). It 25 Of course, one could introduce a spurious differential structure on R 4 so that r 2 becomes smooth for a non-Euclidean F0, but this would not be natural by any means. 26 In principle, the normal coordinates can be defined when the anisotropic connection is defined for all the vectors in T M \0, but it is always possible to extend theĀ-anisotropic connection to all directions locally (see [50,Remark 6.3], where the Lorentz-Finsler case is considered in detail). These coordinates are obtained using the exponential map in a neighborhood as in [ Summing up, we emphasize: (a) the coordinates provided by the exponential map of a Lorentz-Finsler metric at any event e are smooth along the half-lines starting at e and they satisfy (10), and (b) to exclude anisotropic connections because of their lack of smoothness at 0 is a subtle mathematical issue and (as in the discussion of the Step 1 in §5.2.1) this is not justified in EPS neither physically nor mathematically. Thus, the law of inertia should be regarded as compatible with Lorentz-Finsler metrics according to our definition (where the directions outside the causal cone are not taken into account), including even the improper case in Note 4.12). (3). The compatibility of (C, P) as a Weyl space with a (unique) affine connection A obtained by using EPS axiom C becomes a subtle question. On the one hand, Trautman [83] claimed the necessity of a detailed proofs in his review on the reprinted EPS article and, shortly after, this author and V. Matveev [66] characterized when a pair (C, P) is compatible. On the other hand, the notion of Weyl space as the triple (M, C, A) given by EPS does not coincide with the standard one of Weyl geometry 28 . Some authors questioned whether such an EPS structure permits to define a standard Weyl one as well as EPS development at this step. However, very recently, this question has been positively answered by Matveev and Scholtz [65], vindicating the EPS approach.
5.2.3. EPS step (3). The compatibility of (C, P) as a Weyl space with a (unique) affine connection A obtained by using EPS axiom C becomes a subtle question. On the one hand, Trautman [83] claimed the necessity of a detailed proof in his review of the reprinted EPS article and, shortly after, this author and V. Matveev [66] characterized when a pair (C, P) is compatible. On the other hand, the notion of Weyl space as the triple (M, C, A) given by EPS does not coincide with the standard one of Weyl geometry [28]. Some authors questioned whether such an EPS structure permits one to define a standard Weyl one, as well as the EPS development at this step. However, very recently, this question has been answered positively by Matveev and Scholtz [65], vindicating the EPS approach.
We emphasize that the EPS compatibility axiom C can be stated with no modification in the case that C is any cone structure and P is the projective class of pregeodesics of any Ā-anisotropic connection defined on all the C-causal directions (as already commented, C intrinsically determines cone geodesics extending those in EPS conformal cones, Remark 4.7(3)). So, the possibility of extending the previous results to this setting should be explored.
5.2.4. EPS step (4).
In the EPS spirit, the Riemann axiom would be any (minimal, physically well-motivated) assumption making a compatible triple (C, P, A) also compatible with a Lorentzian metric, as the conditions labelled (a) and (b) at step (4). However, in order to state a Finslerian axiom now, one should notice that these conditions involve A and, so, they might depend on the way the previous step is solved.
Anyway, it is worth pointing out some reasons which would support the convenience of such a Finslerian axiom. On the mathematical side, the results collected in Rem. 4.11 (parts (3) and (4)) show a natural consistency: (i) any C can be associated with a Lorentz-Finsler metric L, (ii) any other associated L′ is anisotropically related to L, and (iii) the lightlike pregeodesics of all the associated Lorentz-Finsler metrics agree with the cone geodesics of C. On the physical side, the standard chronometric approach is reduced to the determination of the indicatrix of the observers at each event, and this would depend only on the behaviour of clocks and measurements of proper time [29]. Notice that, in the Finslerian case, this behaviour would not be restricted by any condition of quadratic compatibility (but only by a mild overall concavity and asymptoticity to C).

[27] This means that its Chern-Rund connection defines an affine connection on the underlying manifold; see [80] for quite a few characterizations.

[28] In modern language, a Weyl geometry on M is a conformal structure C endowed with a connection on the R^+-principal bundle P → C, where the fiber of P at each C_p is the class of homothetic Lorentzian scalar products compatible with C_p (see for example [27]); such a notion was considered in references on EPS such as [23].
5.2.5. Finslerian examples strictly compatible with EPS. As we have explained, the requirement of C^2 smoothability at 0 for cones and geodesics is the main gap in the EPS approach. However, Tavakol and van der Berg [81] exhibited Finslerian examples which are even compatible with this requirement. Next, let us analyze these and other possible examples of Finsler EPS compatible (FEPS) spacetimes. A very simple FEPS example would be the following. Consider an affine space endowed with any Lorentz norm L_0 with the same cone as a Lorentzian scalar product ⟨·,·⟩ (L_0 can be obtained by perturbing the indicatrix of ⟨·,·⟩, as explained in Remark 4.11, item (4)). Then, the cone and geodesics of L_0 would satisfy all the EPS axioms, including those of C^2 smoothness at 0. Here, the key is that the affine parallel transport preserves both the indicatrix of L_0 and ⟨·,·⟩.
Remark 5.2. The Tavakol & van der Berg examples also obey this pattern, even though they are more refined and interesting. Indeed, they are Berwald-type spacetimes constructed by using an auxiliary Lorentz metric g. The fact that they are FEPS examples becomes apparent because they have the same cone and geodesics as g.
However, we emphasize that these FEPS examples are not in contradiction with the EPS conclusions. Indeed, the above examples only show that the physical elements C, A, under the EPS restrictions, may be compatible with two different geometric structures: the Lorentz metric g and the Lorentz-Finsler metric L. Deciding which of them would be physically more appropriate would depend on further physical input. In the absence of such input, the use of g would be mathematically simpler. Nevertheless, this input might appear from the measurements of proper time, as suggested at the end of §5.2.4.
In order to obtain a true Finslerian contradiction with the EPS conclusions, one should construct a Lorentz-Finsler metric L with associated cone C and anisotropic connection A satisfying: (i) the EPS C^2 requirements, (ii) the cone geodesics of C are pregeodesics of A, (iii) C is invariant under the A-parallel transport, and (iv) A is not compatible with any Lorentz metric. However, the following known results on linear algebra and Finsler metrics suggest the difficulty of finding such a contradiction. Notice that the Finslerian results have been obtained in the positive definite case (the last one after the original EPS paper) and their suitable extensions to the Lorentz-Finsler case are not always clear: (a) The square of a norm is C^2 at 0 if and only if it comes from a Euclidean scalar product ([86], §4.2, item 5 (b)). As a consequence, the C^2 requirement (i) implies the Lorentzian character of the cones, §5.2.1.
(b) A linear map between two Lorentzian vector spaces is homothetic if and only if it preserves the lightcones [30]. As a consequence, if C is compatible with a Lorentzian metric g (as established in (a)), the preservation of C under A-transport in (iii) implies that this transport must be a g-homothety; in particular, the Riemann axiom (its version (a) in §5.1, item 4) is satisfied.

[30] See for example [8, §2.3].
(c) The exponential map of a Finsler metric is smooth at 0 if and only if the metric is Berwald [2]. As a consequence, the law of inertia (with the C^2 requirement (i)) would imply that only Berwald-type Lorentz-Finsler metrics could be admitted (§5.2.2).
(d) All Finsler metrics of Berwald type are affinely equivalent to a Riemannian space, that is, their affine connections are Levi-Civita connections of Riemannian metrics (Szabó [79]).
Notice that, if a suitable Lorentz-Finsler version of this last result existed (taking into account, eventually, the requirement (ii)), this would imply that any FEPS spacetime is also compatible with a Lorentz metric, that is, the requirement (iv) could not be fulfilled if (i), (ii) and (iii) held.
Remark 5.3. Recently, Fuster et al. [28] have shown that there are Berwald-type Finsler spacetimes which are not affinely equivalent to a Lorentz metric. However, they contain non-smooth directions; this must be taken into account for the comparison with Szabó's result or the possible contradiction with EPS. Anyway, they show a minimal violation of smoothness. Indeed, their examples include improper Lorentz-Finsler metrics L satisfying both (i) and (ii) in Note 4.12 and, moreover, some power L^r (with r > 1 an integer) is smooth even at the lightlike directions of their cone, see Remark 6.3.
Remark 5.4. Recently, Hohmann et al. [39] have classified the Berwald spacetimes which are spatially homogeneous and isotropic. Among them, they have found a genuinely Finslerian class (with cones equal to those of classical FLRW spacetimes). As a proper Finslerian extension of relativistic cosmological spacetimes, the interest of this FEPS class is remarkable (even if it is not clear whether they yield a true contradiction with EPS).
5.3. Constructive EPS approach vs observer's approach. In order to compare the EPS approach and ours, notice first that EPS distinguishes between a chronometric approach à la Synge [78] and their constructive approach. The former regards the concepts of particle and standard clock as basic, and introduces the metric g as fundamental. So, it regards as primitive an easily measurable physical quantity (proper time) and a single geometric structure (the metric), the latter encoding all the other geometric elements in a simple way. As a consequence of these advantages, the chronometric approach is very economical. However, EPS also pointed out drawbacks such as: (1) the impossibility of constructing the metric from the behavior of the clocks alone,
(2) the inclusion by hand of the hypothesis that metric geodesics will correspond to free motion and, then, (3) the expectation that the clocks constructed by means of freely falling particles and light rays will agree with the metric clocks. This motivated their constructive approach, starting from basic elements (events, particles, light rays) and axioms close to physical experience. Certainly, EPS aimed to deduce the metric structure from their axioms. However, the difficulties found at some points (as explained in §5.2.3, step (3) was solved only very recently), as well as the necessity to introduce a Riemannian axiom at the end, make the procedure somewhat awkward.
In contrast, our approach is neither chronometric nor constructive; instead, it only appeals to the way we measure. As such a procedure is complex, one starts from the ideal situation in which some symmetries among measurements are assumed (our two postulates). Under our viewpoint, if such symmetries did not hold at all, even the meaning of the verb "to measure" would not be clear. However, in the case that the symmetries can be invoked as an approximation, the meaning of measurements can be recovered. Then, the emergence of some geometric structures resembles a sort of experimental Klein's Erlangen program.
Notice that only hypotheses on the way of taking coordinates of space and time (inertial reference frames, observers) were assumed. It is noteworthy that only a few possibilities emerged for the geometry of spacetime when these symmetries hold in a strict way. From the standard physical viewpoint (close to philosophical realism), the fact that space, time and matter allow us to measure in some specific way should be interpreted as evidence of the power of the emerged geometric structures to describe the physical spacetime.
Anyway, it is also worth noticing that our final geometric model of spacetime (a manifold endowed with a Lorentz-Finsler metric defined only on the set Ā of causal vectors for a cone structure) is compatible with the EPS approach. Indeed, as shown in the previous subsection, EPS excluded the properly Finslerian case only due to two mathematical subtleties about unjustified restrictions of smoothness in radar coordinates (step (1)) and the law of inertia (step (2)). As pointed out in our discussion at §5.2.4, in the case that C (or the Weyl pair (C, P) in step (4)) were not assumed to be compatible with a Lorentz metric, the Riemannian axiom might be replaced by a Finslerian one which would involve only the behaviour of clocks.
Finally, we emphasize that the EPS approach also gives strong support to our hypothesis that, in principle, the Lorentz-Finsler metric must be defined only on the causal directions in Ā: no basic element of the EPS approach (particles, light rays, radar coordinates, echoes) involves non-causal directions.
6. Lorentz symmetry breaking
The implications of the introduction of Finslerian geometry may be more transparent if we focus on the Lorentz symmetry breaking which occurs when Lorentz-Finsler norms are used to extend Special Relativity (i.e., when one considers only the second non-linearization in Table 1). We will approach this breaking from our theoretical viewpoint; for a more experimental one, a review on tests of Lorentz invariance (which includes Lorentz-Finsler possibilities and discussions on the von Ignatowski approach) was updated in 2013 by Liberati [61].
6.1. Modified Special Relativity. Assume that the spacetime has the structure of an affine n-space Aff and is endowed with a Lorentz-Minkowski norm L_0 rather than a Lorentz scalar product ⟨·,·⟩_1. Roughly speaking, this is a generalization of Special Relativity where, instead of dropping Postulate 1 (as in General Relativity), we are dropping Postulate 2. Thus, one has affine reference frames but no IFRs; however, one can still assume that any physically relevant vector basis B will be composed of a timelike vector with respect to the cone C_0 associated with L_0 and three non-causal ones spanning a spacelike hyperplane Π (Π ∩ C_0 = ∅).
Remark 6.1. There is a mathematical analogy between the transition from ⟨·,·⟩_1 to L_0 and the one from Special to General Relativity. The latter goes from the point-independent ⟨·,·⟩_1 to a Lorentz metric g_p which depends on the point p in an n-manifold M. In the former transition, the vector space V associated with Aff is endowed with a Lorentzian metric g_v which depends on the direction of v ∈ Ā_0 for some cone structure C_0. What is more, the independence of g_v from the radial direction (g_v = g_{λv} for λ > 0) makes relevant only the variation of v on a topological (n−1)-spherical cap.
6.1.1. VSR and GVSR. The transition from ⟨·,·⟩_1 to L_0 appears naturally in so-called Very Special Relativity (VSR). This was introduced by Cohen and Glashow [18], who realized that most physical theories (including those satisfying charge-parity symmetry) which are invariant under certain proper subgroups of the Poincaré group have the symmetries of Special Relativity. Thus, the cases when VSR does not imply Special Relativity appear as a convenient arena to test violations of Lorentz invariance. Remarkably, Bogoslovsky [12] had already studied the most general transformations which preserve the massless wave equation, and he found the invariant metric

L_{Bog} = \langle \cdot,\cdot \rangle_{1}^{\,1-b}\,(\beta \otimes \beta)^{b} \qquad (12)

where β is a ⟨·,·⟩_1-lightlike dual vector and 0 ≤ b < 1 a constant [31].
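Reading (12) as the function L_{Bog}(v) = ⟨v,v⟩_1^{1−b} β(v)^{2b} (our reconstruction of the display, consistent with the manifold version in §6.1.2), its homogeneity is immediate:

L_{Bog}(\lambda v) = \big(\lambda^{2}\langle v,v\rangle_{1}\big)^{1-b}\,\big(\lambda\,\beta(v)\big)^{2b} = \lambda^{2}\,L_{Bog}(v), \qquad \lambda > 0,

so L_{Bog} is 2-homogeneous for every 0 ≤ b < 1, as a Lorentz-Finsler metric must be.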
Remark 6.2. (1) When L_{Bog} is restricted to the future causal cone C_0 of ⟨·,·⟩_1, it becomes a Lorentz-Minkowski norm, up to the requirement of differentiability at the lightlike vectors; that is, L_{Bog} is an improper Lorentz-Minkowski norm according to Note 4.12. Indeed, L_{Bog} is not smooth at C_0, but it trivially satisfies properties (i) and (ii) of that note since, in this case, the A-anisotropic Chern connection of L_{Bog} is the affine connection of the Euclidean space.
(2) Recall that the restriction of L_{Bog} to the causal C_0-vectors is natural not only because of the physical reasons discussed in the previous sections, but also because the vectors where L_{Bog} vanishes include the ⟨·,·⟩_1-spacelike ones in the kernel of β, and these vectors do not seem to admit any natural interpretation as directions of light rays.
As a generalization of VSR to curved spaces, General Very Special Relativity (GVSR) drops the invariance of VSR by translations. This was introduced by Gibbons et al. [32], who pointed out the Finslerian character of GVSR. Relevant examples of Lorentz-Finsler metrics in VSR and GVSR have been found recently, see [29, 30] and references therein.

Remark 6.3. A natural generalization of the Bogoslovsky metric to GVSR is obtained by regarding ⟨·,·⟩_1 and β as a Lorentz metric and an arbitrary 1-form on a manifold M. Fuster et al. [28] even consider the generalization obtained by multiplying the latter by a homogeneous factor of the type (c + m β^2/⟨·,·⟩_1)^p, where c, m, p ∈ R. Among this type of metrics, they found the Berwald spacetimes non-affinely equivalent to a Lorentz one cited in Remark 5.3.
6.1.2. Smoothability at the cone and birefringence. Starting from our previous study of the Bogoslovsky metric, we can go further into the issue of the differentiability of Lorentz-Finsler metrics at the cone by comparing our approach with the one introduced by Pfeifer and Wohlfart (PW) [73, §A], which has since been modified several times [37, 39].
These authors considered a definition of Lorentz-Finsler spacetime and metric which permits degenerate directions. This definition is consistent with our notion of improper Lorentz-Finsler metric in Note 4.12 and the conditions (i) and (ii) therein. Essentially, PW consider, instead of a Lorentz-Finsler metric L as above, a function L_r which is r-homogeneous for some r ≥ 2, and they relax the non-degeneracy of the fundamental tensor g, allowing a set of zero measure where it degenerates. Remarkably, the smoothness of L_r does not imply the smoothness of the 2-homogeneous function L = L_r^{2/r} along the cone C. Nevertheless, the A-anisotropic connection (which is well defined on a dense set of timelike vectors) can then be extended to the lightlike ones (see [73, Th. 2]). In this case, L = L_r^{2/r} falls under our definition of an improper Lorentz-Finsler metric with a connection extendible to C.
However, for most choices of b, the Bogoslovsky metric (12) is an example which does not fall under the PW definition, in spite of having a regular cone and a connection extendible to it (indeed, both of them the same as in Lorentz-Minkowski spacetime). Nevertheless, such metrics remain under the definition in the variants [37, 39] and they are always improper Lorentz-Finsler in the sense of Note 4.12, which seems to provide a suitable geometric framework for these cases. Indeed, let us analyze a generalization of Bogoslovsky metrics from norms to arbitrary manifolds considered in [30]. Let L_{Bog} = g(·,·)^{1−b} (β ⊗ β)^b, where g is a (time-oriented) Lorentzian metric and β a 1-form on a manifold M; notice that, whenever β remains g-causal, the future cone C of g agrees with the lightlike vectors of L_{Bog} and this metric is well defined on all the g-causal vectors. Let r = 1/(1−b) and L_{Bog}^r = g(·,·)(β ⊗ β)^m, with m = b/(1−b). It is not difficult to see that the fundamental tensor of L_{Bog}^r has the same signature as g when β(v) > 0 (use for example the criterion in [50, Prop. 4.10]), but it is trivially equal to zero when β(v) = g(v,v) = 0 and 1/2 < b < 1 (observe that in such a case m > 1). As a consequence, if β is always g-timelike, the generalized Bogoslovsky metric is always a Finsler spacetime according to the PW definition. When β is g-lightlike, there will be lightlike directions of L_{Bog} which do not satisfy the PW conditions, no matter whether the connection is extendible to the (regular, Lorentzian) cone C or not; however, these metrics will be improper Lorentz-Finsler and satisfy also the definitions in [37, 39].
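The value m = b/(1−b) used above follows from a one-line computation (ours): with r = 1/(1−b),

L_{Bog}^{\,r} = \Big(g(\cdot,\cdot)^{1-b}\,(\beta \otimes \beta)^{b}\Big)^{1/(1-b)} = g(\cdot,\cdot)\,(\beta \otimes \beta)^{b/(1-b)},

and indeed m = b/(1−b) > 1 exactly when 1/2 < b < 1, matching the degeneracy discussion.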
An issue beyond the lack of smoothness is birefringence. This phenomenon occurs in some crystals and is described by using two cones, each one with a Lorentz or Lorentz-Finsler metric. It is related to the dispersion of light of different wavelengths in the crystal. Some authors have pointed out the possibility that such dispersion also occurs as a constitutive element of the spacetime [56, 74].
One way to describe the lightrays when there is birefringence is by using the product of two Lorentz metrics, L = \sqrt{L_1 L_2}. Essentially, the lightrays are then described by the lightlike geodesics of this product; indeed, when one of the metrics, say L_1, vanishes and the other does not, a metric anisotropically conformal to L_1 is obtained. However, some additional subtleties appear. For example, when the lightcones C_1, C_2 of the metrics are one inside the other, say C_1 < C_2, this product is an improper Lorentz-Finsler spacetime on the domain Ā_1 determined by the interior cone C_1 (see [50, Appendix A.5]). Notice, however, that the situation would be more complex when the position of the cones is arbitrary. Assuming that the intersection A_1 ∩ A_2 is non-empty at every p ∈ M, each (A_1)_p ∩ (A_2)_p is convex. However, its boundary may have non-smooth directions, and L would become an improper Lorentz-Finsler metric.
Under our viewpoint, the existence of different light cones may be a worthy possibility (see the discussions around Def. 2.8). However, in principle, our mathematical framework would consider the cones separately. Indeed, a possible way to describe phenomena related to the dispersion of light would be to introduce a space M̃ = M × R^+ with an extra dimension representing the refractive index n. Then, a Lorentz-Finsler metric L_n would appear for each n and the different cone structures C_n on TM × {n} would project onto TM. The birefringent model would correspond to an effective description of polarization using two refractive indices, that is, the projection onto M of a limit case on M̃ where only two values of n become relevant.
6.2. Anisotropic speed of light. In subsection 3.3, the possibility of a pointwise variation of c was discussed for Lorentz metrics. As explained there, an additional element beyond the metric structure (such as a pointwise measurement of the fine structure constant α) was germane. Next, we will consider some different possibilities for the measurement of a varying speed of light (VSL) proper to the Lorentz-Finsler case. The underlying reason for the difficulty of measuring a VSL in the Lorentzian case was that the Levi-Civita parallel transport is a conformal transformation (indeed, an isometry), thus always mapping lightlike cones affinely into lightlike cones. A first possibility in the Finslerian case is:

(VSL1) Lightlike cones at different points may be non-affinely equivalent [32].

Clearly, this should be an indicator of the existence of different speeds of light at different points. Anyway, such a possibility would ultimately arise because a Lorentz-Finsler metric L provides a breaking of Lorentz symmetry at each point. This would result in anisotropies of the speed of light emitted from a single event p in different directions. So, let us focus on this possibility, which includes Lorentz-Minkowski norms in affine spaces.

(VSL2) At an event p ∈ M, a single observer v ∈ Σ_p finds distinct speeds of light in different directions of its rest space (T_vΣ_p endowed with g_v).

At least from a purely geometric viewpoint, this could happen as follows. The cone C_p will intersect the rest space T_vΣ_p in some strongly convex (n−2)-hypersurface S_v, say, the sky observed by v [33], see Fig. 2. Then, for u ∈ S_v, the value c_v(u) := g_v(u,u)^{1/2} can be regarded as a u-dependent speed of light measured by v (namely, the spacelike length covered by the light in the direction of u in a unit of time).

[32] From a mathematical viewpoint, the property that the lightcones are affinely diffeomorphic is a Berwald-type property. Recall that one of the characterizations of Berwald manifolds within the class of Finsler ones is the existence of a torsion-free derivative operator such that the parallel translations with respect to it preserve the Finsler norms of tangent vectors [80, Prop. 6]; in particular, the norms at different points are isometric.

[33] Equally, the rest space and the sky could be regarded as the hyperplane T^0_vΣ parallel to T_vΣ through the origin 0 ∈ T_pM and the projection S^0_v of S_v along the direction v_p into T^0_vΣ, respectively. This is a usual identification in General Relativity [75].

Figure 2 (caption fragment): ... and, so, the observer v could conclude c_v(u_1) ≠ c_v(u_2), i.e., the speed of light depends on the direction.

Figure 3 (caption): In the plane π spanned by the observers v, v′, the tangent lines to Σ_p in π, l and l′, differ. Then, if u ∈ S_v and u′ ∈ S_{v′}, possibly g_v(u,u) ≠ g_{v′}(u′,u′). So the observers v, v′ measure different speeds of light in their common plane π.
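As a concrete toy illustration of (VSL2) (a sketch of ours; the Euclidean metric here merely plays the role of g_v, which in general depends on v), take the constant cone C = {t = F_0(x, y)} of the quartic norm F_0(x, y) = (x^4 + y^4)^{1/4} and the static observer v = ∂_t. The sky S_v consists of the points u with t = 1 and F_0(u) = 1, and the measured "speed" in each direction is the length of u:

```python
# Sketch (ours): direction-dependent speed of light on the sky of the quartic
# cone C = {t = F0(x, y)}, F0(x, y) = (x^4 + y^4)^(1/4), for the observer
# v = (1, 0, 0). The Euclidean norm stands in for g_v (an assumption).
import numpy as np

def sky_point(phi):
    """Point u(phi) on {t = 1} with F0(u) = 1, i.e., on the sky S_v."""
    d = np.array([np.cos(phi), np.sin(phi)])    # spatial unit direction
    rho = (d[0]**4 + d[1]**4) ** (-0.25)        # radius with F0(rho * d) = 1
    return rho * d

phis = np.linspace(0.0, 2.0 * np.pi, 361)
speeds = [np.linalg.norm(sky_point(p)) for p in phis]  # Euclidean length = speed
print(min(speeds), max(speeds))  # ~1.0 along the axes, ~2**0.25 on the diagonals
```

The speeds range from 1 (along the coordinate axes) to 2^{1/4} ≈ 1.19 (along the diagonals), so this observer finds direction-dependent speeds of light.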
Remark 6.4. (1) This u-dependent speed of light appears because of the anisotropies of Σ_p. Thus (in contrast to (VSL1)), it might happen even for a Lorentz-Finsler metric compatible with the cone structure of Lorentz-Minkowski spacetime (or of any other Lorentzian manifold).
Indeed, at each p ∈ M, the metric g_v depends on the space of observers Σ_p close to v. So, if Σ_p were the space of observers for the Lorentz-Minkowski metric L, we could perturb it around some v ∈ Σ_p in order to obtain the space of observers Σ′_p of an anisotropically equivalent Lorentz-Finsler metric L′ satisfying v ∈ Σ_p ∩ Σ′_p and T_vΣ_p = T_vΣ′_p. Then, the skies of v for L and L′ are equal but, in general, g_v ≠ g′_v.

(2) It is also worth pointing out that two different observers v, v′ ∈ Σ_p will span a single plane Π ⊂ T_pM which can be regarded as a timelike one for both g_v and g_{v′}. The intersections of Π with the rest spaces T_vΣ_p, T_{v′}Σ_p give two lines l and l′ (which are spacelike for g_v and g_{v′}, respectively). Even though l and l′ are different, they both represent "the spacelike direction where the other observer lies". However, the speed of light in the (consistently oriented) directions of l and l′ may differ, that is, c_v(u) ≠ c_{v′}(u′) for u ∈ l and u′ ∈ l′ (see Fig. 3).
The possibility of measuring (VSL2) might be somewhat naïve because: (a) experimental difficulties might appear in the measurement of the involved geometric elements g_v, S_v (or the relation between g_v and g_{v′}), and (b) in the case that C were compatible with a Lorentzian metric, one should speak of anisotropies of the space for massive particles (or, eventually, for measurement instruments) rather than for the propagation of light.
Anyway, there is an anisotropic propagation of light in the case of a breaking of the conformal Lorentz symmetry, namely:

(VSL3) At an event p ∈ M, the cone C_p is not compatible with any Lorentz scalar product.
In principle, this could be measured by using the trajectories of lightrays even in the case of Lorentz-Minkowski norms on an affine space (so that the lightrays are straight lines). Indeed, when L comes from a Lorentz scalar product g, then g_v depends only on p (g_v ≡ g_p), S_v becomes a sphere in T_vΣ centered at 0 of radius r = 1, the second fundamental form σ (with respect to the inner direction) of S_v can be identified with the restriction of g_p/r^2 to S_v, and the speed of light is naturally regarded as isotropic. However, in the case of a Lorentz-Minkowski norm L, the second fundamental form σ_u at some u ∈ S_v may satisfy, for example, σ_u > g_v/g_v(u,u) (as quadratic forms on T_uS_v). Then, the vectors of S_v close to u can be regarded as "shorter" than those in the Lorentz metric case, that is: the speed of light measured by v ∈ Σ_p in the direction u ∈ S_v := T_vΣ_p ∩ C_p is bigger (resp. smaller) than the speed of light in the directions close to u when σ_u > g_v/g_v(u,u) (resp. σ_u < g_v/g_v(u,u)). More precisely, if λ (> 0) is the eigenvalue of σ_u in the direction w ∈ T_uS_v, then 1/λ would rate the increase of the speed along the direction w.
We emphasize that the previous procedure would allow the observer v to realize that an anisotropy holds either in Σ_p or in C_p. The fact that g_v only depends on the behavior of Σ_p around v prevents one from disregarding the first case. However: C_p is compatible with a Lorentz scalar product if and only if S_v is an ellipsoid, and the latter property can be checked in purely affine terms on T_vΣ (namely, it holds when the cubic form C(X,Y,Z) = (∇_X σ_ξ)(Y,Z) [34] vanishes, where σ_ξ is the second fundamental form and ∇ the induced connection, both for the Blaschke normal ξ; see [70, Theorem II.4.5]).

[34] Observe that the cubic form coincides, up to multiplication by a function, with the Matsumoto tensor of the pseudo-Minkowski norm having the affine hypersurface as indicatrix (see for example [68] or [51]). The Matsumoto tensor is zero when the pseudo-Finsler metric comes from a scalar product.
Remark 6.5. The property ∇σ_ξ ≢ 0 implies the intrinsic anisotropy of the speed of light, but it does not assign an "absolute" speed of light c_v(u) (which would depend on the Lorentz-Finsler metric L, as in (VSL2)). However, one has the possibility of measuring variations of the speed of light around each u. The qualitative behavior of such variations relies on the cone structure instead of the metric (compare with Remark 6.4).
6.3. Matter as anisotropy and Quantum Physics. Clearly, a Lorentz-Minkowski norm or a properly Lorentz-Finsler metric would appear if some type of anisotropy were detected in the physical spacetime (see for example [71] and references therein). However, we emphasize: the existence of an anisotropy does not necessarily mean a "pre-existing spacelike anisotropy of empty space". Indeed, the existence of matter induces anisotropies in causal directions, and this might be reflected in the indicatrix of L.
This possibility is stressed in our formalism, as L is defined only on causal directions. Even though this idea is quite speculative, let us explain it briefly.
Consider first that an event p ∈ M is crossed by a particle γ, γ(0) = p, with mass m > 0. In this case, γ′(0) selects a privileged direction at p, and this would introduce an anisotropy in the space of observers O_p (with respect to a background Lorentz metric). This perturbation might be made quantitative in several ways; for example, by introducing a perturbation of the curvature of O_p around p proportional to m. In the case of having a stress-energy tensor T in an initial background Lorentzian metric g, algebraic properties of T (such as the energy density or pressure for perfect fluids) might induce the perturbation of O_p.
These perturbations, even if tiny, might be of interest at the Planck scale. Indeed, it is commonplace to assume that nonlinear modifications of the linear Schrödinger equation might lead to an effective collapse which resolves the measurement problem (see for example [25, §7]). So, the nonlinear framework of Finsler spacetimes opens possibilities in this direction which are worth studying further.
Remark 6.6. Recent examples of Finslerian spacetimes, such as the model of relativistic kinetic gases in [38], can also be understood from the above viewpoint. Their authors explain that an ensemble of a large number of individual interacting and gravitating point particles can be described at three levels: (1) individual particles, (2) description as a kinetic gas, by using a 1-particle distribution function (1PDF), which retains information about velocities, and (3) description as a fluid, where velocities at each point are also averaged.
That reference develops the second viewpoint, where a Lorentz-Finsler model emerges. However, one should take into account that, certainly, the individual-particle description is the extreme idealization of the gas, as these particles should be quantum objects. So, the Lorentz-Finsler metric might directly be the most natural description as a semi-classical limit.
Conclusions
Throughout this article, we have obtained results in the following three directions: (1) A revision of the foundations of the theories of non-quantum spacetime from the viewpoint of how space and time are measured, carried out in three parts. 1a. In the first one (doubly linearized models, §2), the previous approaches in this direction [40, 9] have been sharpened and simplified, and the four compatible models of spacetime have been concisely described. In particular, we have introduced the hypothesis of apparent temporality. This hypothesis is enough to obtain the models with no additional hypotheses on, for example, group actions (Theorem 2.4). Moreover, it yields time-orientability in three of the models (the temporal ones) and it underlies our definition of Finsler spacetime, where the Lorentz-Finsler metric is defined only on the causal vectors of a single cone structure. The other two parts consider their natural non-linear generalizations.
1b. The first non-linearization (§3) is carried out in the spirit of the generalization from Special to General Relativity. In a natural way, the previous four models lead to a signature-changing metric, with Leibnizian structures (and their duals) in the degenerate part, and to pointwise variations c(p) of the speed of light, which are briefly discussed. It is worth pointing out that, consistently with the discussion at the end of §2, here c(p) appears as the supremum of velocities between observers at each event p; however, it becomes identifiable with the speed of propagation of light because light propagates in vacuum (and c(p) is the unique common speed different from 0 measurable by all the observers at p).
1c. Focusing on the relativistic case, the second non-linearization (§4) is obtained just by removing the relativistic quadratic restriction (intrinsic to Lorentzian metrics) on the space of observers. This leads directly to our definition of Finsler spacetime. Its mathematical background and subtleties (including issues of differentiability specific to the Finsler case which will be relevant later) are also introduced concisely.
(2) A critical revision of the EPS approach (§5), with a triple aim.
2a. The first aim was to examine which EPS assumptions forbid non-relativistic Lorentz-Finsler metrics from emerging, taking into account previous studies [81, 57]. We have found that these assumptions appear neatly at two steps (§5.2.1, §5.2.2) and they have the same origin: they impose certain conditions of C^2-differentiability at 0 (in each tangent space T_pM) on some geometric quantities which, by their very nature, forbid any anisotropy and, mathematically, lead to the quadratic restriction on the metric (the latter, essentially, by an elementary computation in [86, Proposition 4.1]). Intuitively, this condition can be understood as follows: if one has any element in a vector space depending only on the direction (such as the fundamental tensor of a non-Riemannian Finslerian metric or the Christoffel symbols of a non-affine anisotropic connection), then this element cannot even be continuous at 0, as the zero vector can be regarded as the limit of vectors coming from different directions. Such a condition would not be reasonable from a mathematical viewpoint (it would exclude as non-smooth even all the analytic Finsler metrics), nor from a physical one. Indeed, it would even be preferable to assume isotropy directly as a physical assumption, as such an isotropy might be natural in some cases. In contrast, the assumption of C^2-differentiability at 0 may be misleading a priori, and it interferes with the assumption on radar coordinates (which is regarded as involved by many authors; see for example the recent [65, footnote 7]). For the sake of completeness, we have also studied the Finslerian examples which are compatible with the EPS axioms (including C^2-differentiability at 0, as in [81]) and discussed to what extent they contradict the EPS conclusions (§5.2.5).
2b. The second aim was to compare EPS, as well as the standard chronometric approach, with ours. As an important difference between the philosophies of the previous approaches and ours, our postulates do not involve the physical objects which will be measured, but the way we can measure physical objects. Indeed, the possibility of making meaningful measurements of the physical spacetime relies on the existence of some mild symmetries among the observers, so that different measurements (carried out at different events and by different observers at each event) can be compared. As stressed here, such symmetries become apparent in the observers' space O and then allow one to determine some geometries for the physical spacetime. The fact that the exact symmetries of O in the initial linearized model may be only approximate leads to General Relativity, modified Special Relativity and the general model of Finsler spacetimes.
2c. As an extra bonus of the previous two aims, the EPS approach can also be used to obtain Lorentz-Finsler metrics for the geometry of spacetime. Indeed, removing the criticized hypotheses of C^2 smoothability, any Lorentz-Finsler metric L will be compatible with the first two steps of EPS. The other two steps should justify the uniqueness of L up to an overall factor. These steps would be mathematically involved (indeed, the third one was justified for the original EPS approach only recently [65]). However, as suggested in §5.2.4, the behavior of clocks alone would be enough to construct O and, then, to characterize L. Even though this behavior fits more naturally in the chronometric approach than in EPS, the main objection of these authors to chronometrics (part (1) in §5.3) would be solved. It is also worth emphasizing that, in this way, our procedure becomes simple and rigorous at all stages.
(3) A summary of some issues related to Lorentz symmetry breaking discussed from the introduced viewpoint. This includes: 3a. Very Special Relativity and the Pfeifer & Wohlfart (PW) definition of Finsler spacetimes (§6.1). These are particular cases of Finsler spacetimes with non-smooth lightlike directions (and, so, they do not properly satisfy our definition of Lorentz-Finsler metric). However, they are endowed with a regular cone structure C and an anisotropic connection extendible to C and, so, most of their relevant properties hold (see Note 4.12). The case of the Bogoslovsky metric and its generalization to arbitrary manifolds is studied specifically. Moreover, the way to fit the phenomenon of birefringence into our setting is also discussed.
3b. Three ways to detect the possibility that the speed of light varies with the direction (§6.2). The first one would be a pointwise variation going beyond the one discussed in General Relativity; it relies on the possibility that a cone structure has non-affinely isomorphic cones at different points. The other two ways focus on the Lorentz symmetry breaking at each point p ∈ M. The first of them is a geometric analysis which would detect the anisotropies of the Lorentz-Finsler metric L (and, then, of the measured speed of light) in different situations, namely: when a single observer looks at different spacelike directions (Fig. 2) and when two observers at p compare their spacelike measurements (Fig. 3). Because of these anisotropies of L, the measured speeds of light might be different even for a cone structure compatible with a quadratic (relativistic) cone. Thus, the other procedure focuses on the specific properties of the cone and would detect its lack of quadraticity.
3c. A justification of Lorentz-Finsler anisotropy. Typically, Finslerian anisotropy is considered as a spacelike anisotropy. Notice, however, that our Lorentz-Finsler metrics are not even defined on spacelike directions. As extensively argued along the article, Lorentz-Finsler anisotropies appear on the space of observers. So, it is natural to think that they might be associated with the distribution of mass and energy. These might be anisotropic even if one thought that a "background isotropic vacuum" existed. In this vein, a possible link with Quantum Mechanics is suggested, and further developments on this issue might be worthwhile. Summing up, this paper tries to provide physical grounds and precise mathematical formulations for the development of Lorentz-Finsler geometry and its relativistic applications. It is worth emphasizing that the applications, however, go beyond the relativistic setting. For example, an extra bonus has its roots in analogue gravity [5]. Indeed, the classical non-relativistic problem of Zermelo navigation is better understood by using Lorentz-Finsler metrics and the corresponding Fermat principle [15, 50]. Then, on the one hand, the classical Finslerian/Zermelo viewpoint has applications to spacetimes [14, 47] and, on the other, the Lorentz-Finsler viewpoint has applications to issues such as the propagation of fire spreading, quantum navigation and classical Finsler geometry [62, 33, 49]. So, Lorentz-Finsler geometry and its applications appear as a fascinating area to be developed further.
"Physics"
] |
Emerging Variants of the Integrative and Conjugant Element ICEMh1 in Livestock Pathogens: Structural Insights, Potential Host Range, and Implications for Bacterial Fitness and Antimicrobial Therapy
Horizontal gene transfer of integrative and conjugative elements (ICE) in bacterial pathogens of the bovine respiratory disease (BRD) complex has emerged as a significant cause of antimicrobial resistance (AMR) and therapeutic failure and mortalities in cattle. The aim of this study was to assess an AMR ICE occurring in Pasteurella multocida from a case of BRD, designated ICEMh1 PM22 , for its structure and host genome insertion site, and to identify consequences for host fitness and antimicrobial therapy. The modular structure of ICEMh1-like elements found in several related livestock pathogens was compared to ICEMh1 PM22 , and the repertoire of cargo genes in variable ICE modules was functionally categorized. AMR genes were identified as frequent additions to the variable modules of ICEMh1-like elements. Random PCR-based mapping of ICEMh1 PM22 -genome junctions in transconjugants provided evidence that ICEMh1 PM22 integrates into the tRNA-leu for the UUG codon, and not into tRNA-leu for other codons. This was separately confirmed in the genomes of ICEMh1-like-harboring livestock pathogens. Bacterial genera harboring receptive tRNA-leu UUG were identified to establish the potential host range of ICEMh1-like elements. ICEMh1 PM22 -carrying transconjugants in P. multocida and Mannheimia haemolytica were less fit than isogenic strains without the ICE when grown without antimicrobial selection. This fitness cost was abrogated in the presence of subinhibitory concentrations of antimicrobials. Despite this cost, ICEMh1 PM22 was retained in transconjugants in extended culture. To identify possible therapeutic efficiencies, antimicrobial combinations were screened for synergistic interactions against AMR ICEMh1 PM22 -carrying transconjugants. No antimicrobial combination tested exhibited synergistic interactions against AMR P. multocida or M. haemolytica harboring ICEMh1 PM22 . In conclusion, this study provided information on the structural variation of ICEMh1-like elements, refined the ICE insertion site and potential host range, and demonstrated the risk and consequences for AMR following horizontal transfer of ICE into BRD pathogens.
INTRODUCTION
Integrative and conjugative elements (ICEs) are mobile genetic elements that can transfer autonomously by conjugation in bacteria. In general, ICEs can excise from the host chromosome to form a circular extrachromosomal intermediate which can be transferred and integrated into a receptive host (Johnson and Grossman, 2015). These widespread elements have increasingly been detected in bovine pathogens comprising the bacterial component of the bovine respiratory disease (BRD) complex (Michael et al., 2011b; Eidam et al., 2014). BRD bacteria are typically opportunistic pathogens which harmlessly reside in the nasopharynx, but upon exposure of cattle to stressors (e.g., weaning, transportation, feed changes) these bacteria can invade the respiratory tract and cause infection in conjunction with a variety of viral agents (Mosier, 2015). Serious BRD infections often lead to bovine pneumonia mortalities in calves. As a cause of significant economic loss to the cattle industry, BRD is typically prevented or treated with antimicrobials, including the use of in-feed antimicrobial prophylaxis (Cameron and McAllister, 2016). ICEs tend to have a modular structure, comprising modules for recombination and conjugation (Johnson and Grossman, 2015), but ICEs may also carry 'cargo' gene modules that are often advantageous to the recipient cell in terms of altering virulence, metabolic, and antimicrobial resistance capabilities. The first comprehensive reports of ICEs in BRD Pasteurellaceae described ICEPmu1 and ICEMh1, in Pasteurella multocida strain 36950 (Michael et al., 2011a,b) and Mannheimia haemolytica strain 42548, respectively (Eidam et al., 2014). Recently, related ICE sequences have been described in other BRD Pasteurellaceae isolates (Klima et al., 2016; Beker et al., 2018). Of particular concern, these ICEs can harbor multiple resistance genes, including determinants capable of conferring resistance to most veterinary antimicrobials that are approved for the treatment of BRD (FDA, 2016). In addition to those Pasteurellaceae, ICEPmu1- and ICEMh1-like sequences have also been detected in mortality-associated Histophilus somni (Bhatt et al., 2018), as well as in disease-associated Pasteurellaceae from swine suffering from respiratory infection (M. J. Hauglund et al., USDA, unpublished data). Furthermore, resistance genes found in these ICEs are often highly related to AMR genes from other human and veterinary bacteria (Michael et al., 2011a,b; Eidam et al., 2014; Cameron et al., 2018).
With the increasing availability of BRD-associated bacterial genome sequences, it is now evident that related ICEs are widespread in the Pasteurellaceae, and that these differ primarily in their 'cargo' (Beker et al., 2018). ICEPmu1, the first BRD ICE characterized in P. multocida, was shown to integrate into a copy of the leucine tRNA (tRNA-leu) (Michael et al., 2011b). The chromosome-integrated form of ICEPmu1 was flanked by 13 bp direct repeats (5′-GATTTTGAATCAA-3′), which corresponded to the attB attachment site for site-specific recombination. In the circular form of ICEPmu1, the direct repeat sequence is flanked by left and right 'attachment' sites, attL and attR, which are thought to be the recombination sites for a XerCD-like pair of tyrosine recombinases (Castillo et al., 2017). In the integrated form, the direct repeat and either attL or attR comprise the left and right terminals of the ICE. ICEMh1 also targets tRNA-leu, but is flanked by a shorter, otherwise identical 11 bp direct repeat (5′-GATTTTGAATC-3′) (Eidam et al., 2014). Thus, ICEPmu1 and ICEMh1 seem to have evolved from a common ancestor, and a number of genes (particularly those for recombination and conjugation) are syntenous (Eidam et al., 2014). Both ICEs also harbor two separate cargo regions, named 'resistance region 1' and 'resistance region 2' because these regions characteristically harbor genes for antimicrobial resistance. ICEPmu1 has 12 resistance genes: strA and strB (conferring resistance to streptomycin), aphA1 (kanamycin and neomycin), sul2 (sulfonamides), tet(H) (tetracyclines), floR (phenicols), erm(42) (macrolides and lincosamides), aadB (gentamicin), aadA15 (streptomycin and spectinomycin), bla OXA−2 (β-lactams), and msr(E) and mph(E) (macrolides) (Michael et al., 2011a,b). ICEMh1 has only 5 resistance genes: strA, strB, aphA1, sul2, and tet(H) (Eidam et al., 2014). Both ICEs are fully mobilizable and proven to integrate into P. multocida, M. haemolytica, and E. coli recipients (Klima et al., 2014).
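As a schematic illustration of the integration signature just described (a minimal sketch of ours, with a toy sequence; only the 11 bp repeat 5′-GATTTTGAATC-3′ is taken from the text), one can check that a chromosome-integrated element is flanked by the attB-derived direct repeat:

```python
def flanking_repeat(genome: str, ice_start: int, ice_end: int, repeat: str) -> bool:
    """True if `repeat` immediately precedes the element and also forms its
    right terminus. Coordinates are 0-based; ice_start/ice_end (exclusive)
    delimit the integrated element, taken here to end with the right repeat."""
    left = genome[ice_start - len(repeat):ice_start]
    right = genome[ice_end - len(repeat):ice_end]
    return left == right == repeat

# Hypothetical toy 'chromosome' with an element flanked by the ICEMh1 repeat:
repeat = "GATTTTGAATC"
genome = "AAAA" + repeat + "NNNNNNNNNNICECARGONNNN" + repeat + "TTTT"
start = 4 + len(repeat)   # first base after the left repeat
end = len(genome) - 4     # end of the right repeat (exclusive)
print(flanking_repeat(genome, start, end, repeat))   # True
```

Here the element is taken to span from the first base after the left repeat through the right repeat, mirroring the description of the direct repeat plus attL/attR as the ICE terminals.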
The observation that ICEPmu1 and ICEMh1 can easily move between genera and species highlights the extreme risk for rapid acquisition of AMR in BRD bacteria via horizontal gene transfer. The acquisition of these AMR genes poses an obvious threat to effective antimicrobial therapy, and could exacerbate economic losses associated with cattle mortalities. Greater knowledge of basic ICE biology is necessary to understand their potential host range, including acquisition by human pathogens, and the threat they pose to the efficacy of antimicrobial therapy. We explored these attributes using isogenic transconjugants of an ICEMh1-like element (ICEMh1 PM22 ) previously identified in a bovine isolate (P. multocida PM22) from a BRD mortality (Klima et al., 2014). The objectives of this study were to (I) refine the insertion site of ICEMh1-like elements, (II) predict host range based on the insertion site, (III) explore the effect of ICEMh1 PM22 acquisition on host fitness, and (IV) identify any synergistic antimicrobial combinations effective against bacteria harboring ICEMh1 PM22 . We tested for synergy using antimicrobials against which PM22 was both non-susceptible and susceptible to identify any combination capable of restoring susceptibility. We hypothesized that this information could be used to ensure prudent antimicrobial usage, inform future BRD mitigation strategies, and identify the potential risk in other areas of human and veterinary medicine.
Bacterial Strains and Growth Conditions
P. multocida PM22 was initially isolated following lung tissue necropsy of a beef cattle BRD mortality (acute fibrinous pneumonia) in a Texas feedlot (Klima et al., 2014). Pasteurellaceae were routinely grown on tryptic soy agar (TSA) blood agar (Dalynn Biologicals, Calgary, AB) or Brain Heart Infusion (BHI) agar (BD Difco, Mississauga, ON). E. coli were maintained on Luria-Bertani (LB) or Mueller-Hinton II (MH; cation-adjusted) agar/broth (BD Difco) supplemented with antibiotics where appropriate. All susceptibility testing was performed in MH broth. Spontaneous rifampin-resistant (Rif R) mutants of P. multocida CCUG 17976 and M. haemolytica ATCC 33396 were obtained by plating cultures at OD 600 2.0 on BHI supplemented with 50 mg/L rifampin (rifampicin; MilliporeSigma, Oakville, ON, United States), followed by incubation for ∼48 h at 37 °C. To generate luciferase-expressing E. coli DH5α, the pAKlux2 plasmid (Karsi and Lawrence, 2007) was electroporated into E. coli DH5α, and transformants were selected and routinely maintained on LB agar supplemented with 100 mg/L ampicillin (MilliporeSigma).
Growth Curves and Co-culture Competitions
Growth curves of Rif R P. multocida CCUG 17976 and M. haemolytica ATCC 33396 and their isogenic ICEMh1 PM22 transconjugants were performed in 200 µL volumes of MH broth inoculated at an OD 600 of 0.005 in 96-well plates (Nunc) sealed with gas-permeable film (Breathe-Easy sealing membrane, MilliporeSigma). Bacterial growth, with shaking at 37 °C, was monitored continuously for 12 h in a plate reader (BioTek Synergy HT). For 2-strain co-culture experiments, luciferase-expressing E. coli DH5α pAKlux2 and test strains (including DH5α without pAKlux2 as a control) were each inoculated at an OD 600 of 0.0025 (i.e., total OD 600 of ∼0.005) into black clear-bottom 96-well plates (Nunc, Thermo-Fisher Scientific, Ottawa, ON, Canada). Bacterial growth was monitored as above, with the addition of luminometry. The maximum light produced (in relative light units, RLU) in each competition was recorded and used to generate a competitive index (i.e., RLU(test strain vs. DH5α pAKlux2) / RLU(DH5α vs. DH5α pAKlux2)). Thus, values <1 indicated that the luminescent E. coli DH5α pAKlux2 reporter performed worse against the test strain than against the E. coli-only control. For long-term repeated-passage experiments to assess whether ICEMh1 PM22 could be lost from the host strain, ICEMh1 PM22 transconjugants were inoculated into 2 mL of MH broth or MH broth supplemented with 0.5× MIC (subinhibitory for the susceptible WT) oxytetracycline (0.125 mg/L), spectinomycin (32 mg/L), or tylosin (16 mg/L) in a sterile 96-well block (Greiner Bio-One, Monroe, NC), and sub-cultured (1/100) every ∼3 d into fresh media supplemented with the same antimicrobials for 150 d. To assess for loss of ICEMh1 PM22 , CFU from OD 600 0.1-equivalents from each culture were enumerated in parallel on MH agar (total count) and MH agar supplemented with each antimicrobial at concentrations selective for ICEMh1 PM22 : oxytetracycline (10 mg/L), spectinomycin (512 mg/L), or tylosin (64 mg/L). Likewise, the effect of subinhibitory concentrations on co-culture competitions with luciferase-expressing E. coli was assessed with 0.5× MICs (for WT strains) of oxytetracycline (0.125 mg/L) and spectinomycin (32 mg/L), as above for co-cultures. Tylosin was not tested because E. coli is intrinsically resistant to macrolides.
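For clarity, the competitive index defined above reduces to a ratio of peak luminescence values; a minimal sketch (ours, with hypothetical RLU time courses):

```python
# Sketch (ours; RLU series hypothetical): the peak light output of the
# DH5a pAKlux2 reporter grown against a test strain is normalized to its
# peak output when grown against plain (non-luminescent) DH5a.
import numpy as np

def competitive_index(rlu_reporter_vs_test, rlu_reporter_vs_dh5a):
    """Values < 1: the reporter fared worse against the test strain than
    against the E. coli-only control (the test strain outcompeted it)."""
    return float(np.max(rlu_reporter_vs_test) / np.max(rlu_reporter_vs_dh5a))

# Hypothetical RLU time courses sampled over a 12 h co-culture:
vs_transconjugant = np.array([120, 950, 2400, 2100, 1500])
vs_dh5a_control = np.array([110, 1100, 3600, 3300, 2300])
print(competitive_index(vs_transconjugant, vs_dh5a_control))  # ~0.67
```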
MIC Determination and Checkerboard Assay Antimicrobial Synergy Screening
Minimum inhibitory concentrations (MICs) were determined for the Rif R and isogenic ICEMh1 PM22 P. multocida CCUG 17976 and M. haemolytica ATCC 33396 transconjugants according to the CLSI approved standard M07, Methods for dilution antimicrobial susceptibility tests for bacteria that grow aerobically (CLSI, 2018). Escherichia coli ATCC 25922 and E. faecalis ATCC 29212 were used as quality control organisms for susceptibility testing. Susceptibility designations for P. multocida PM22 were previously assigned (Klima et al., 2014) in accordance with the CLSI approved standard M31-A3, Performance standards for antimicrobial disk and dilution susceptibility tests for bacteria collected from animals (CLSI, 2008). Antimicrobials tested were purchased from MilliporeSigma (oxytetracycline hydrochloride, chlortetracycline, tilmicosin, gamithromycin, spectinomycin dihydrochloride pentahydrate, penicillin G sodium salt, ampicillin, ceftiofur hydrochloride, sulfamethoxazole, enrofloxacin, tiamulin fumarate, clindamycin hydrochloride and florfenicol) or from Alfa Aesar (Haverhill, MA; tylosin tartrate, neomycin sulfate hydrate and sulfamethazine). Briefly, antimicrobial stocks were prepared immediately prior to testing with respect to potency in the appropriate solvent and filter-sterilized where appropriate. Broth microdilution assays were performed in triplicate in 96-well plates in a total volume of 100 µL of MH broth containing 2-fold dilution series of each antimicrobial and inoculated with 5 µL of each strain tested (total inoculum of ∼5 × 10^4 CFU). Plates were grown aerobically at 37 °C for 20 h and the MIC was recorded as the lowest concentration inhibiting visually detectable growth. For checkerboard synergy assays, two-dimensional arrays of serial concentrations (8 × 8 wells; 2-fold dilution series) of antimicrobials were created for two experiments: (I) all combinations of 5 antimicrobials (i.e., oxytetracycline, spectinomycin, tilmicosin, tylosin, and sulfamethazine) representing major antimicrobial classes to which transconjugants were non-susceptible, and (II) selected combinations of 'non-susceptible' antimicrobials (i.e., oxytetracycline, spectinomycin, and tylosin) with antimicrobials to which ICEMh1 PM22 transconjugants were susceptible (i.e., penicillin G, enrofloxacin, florfenicol, ceftiofur, and chlortetracycline). Growth and sterility controls were included in each checkerboard assay, which was performed in triplicate. MICs were interpreted by eye, and OD 600 was measured in a plate reader to facilitate calculation of a fractional inhibitory concentration index (FICI). For discrepancies between replicates, the highest antimicrobial concentration determined the MIC. The FIC for each antimicrobial (A or B) was determined by dividing the MIC of the antimicrobial in combination by the MIC of the drug alone, according to FIC A = MIC A(combination)/MIC A(alone) and FIC B = MIC B(combination)/MIC B(alone), with FICI = FIC A + FIC B. Interactions were conservatively interpreted as 'synergistic' (FICI ≤ 0.5) or 'no interaction' (FICI > 0.5-4.0), in line with synergy testing guidelines in the Journal of Antimicrobial Chemotherapy (Odds, 2003).
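A minimal sketch (ours) of the FICI calculation and its conservative interpretation, with hypothetical MIC values:

```python
# Sketch (ours): fractional inhibitory concentration index (FICI) for a
# checkerboard well, following the formula described above.
def fici(mic_a_alone: float, mic_b_alone: float,
         mic_a_combo: float, mic_b_combo: float) -> float:
    """FICI = FIC_A + FIC_B, with FIC_X = MIC_X(combination) / MIC_X(alone)."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(f: float) -> str:
    # Conservative cutoffs used in this study; FICI > 4.0 would be
    # antagonism under Odds (2003) but was not a category reported here.
    return "synergistic" if f <= 0.5 else "no interaction"

# Hypothetical MICs (mg/L): drug A and drug B, alone vs. in combination.
f = fici(8.0, 512.0, 4.0, 256.0)
print(f, interpret(f))   # 1.0 no interaction
```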
Bioinformatics and Statistical Analyses
Statistical tests and graphing were performed with SigmaPlot 13.0 (Systat Software Inc.), with error bars indicating the standard error of the mean (SEM). ICEMh1-like-containing Pasteurellaceae genomes were identified by default BLAST alignment to ICEMh1 PM22 , using only results with >80% sequence identity and >50% query coverage and eliminating isogenic or partial sequences. Sequence manipulations and alignments were performed in Geneious 8.1.9 using default MUSCLE parameters or progressive Mauve alignment (Darling et al., 2010) for identification of conserved and variable regions (and genes) in ICEMh1-like sequences. Gene annotations in 41 ICEMh1-like sequences were standardized by re-annotation with PROKKA (Seemann, 2014) and manually scrutinized for putative function. Trees were constructed with PhyML (Jukes-Cantor substitution model). Sequence logos were produced with WebLogo (Crooks et al., 2004). Sequence similarity between ICEMh1-like sequences was visualized with Circos (Krzywinski et al., 2009). tRNA structure prediction and identification were performed with tRNAscan-SE (Chan and Lowe, 2019). To identify bacteria with tRNA-leu UUG containing the conserved direct repeat and palindrome sequences, a BLAST search was performed using the trnL2 (tRNA-leu UUG) sequence from M. haemolytica M42458 with parameters: Program, blastn; Word size, 11; Expect value, 10; Hitlist size, 20,000; Match/Mismatch scores, 2/−3; Gap costs, 5/2. This resulted in 9,604 hits/alignments, which were filtered for those containing the ICEMh1-associated direct repeat (i.e., attB) and palindrome (with 100% identity). From this, a taxonomic report was generated to show bacterial genera.
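The post-BLAST filtering step can be summarized as follows (a minimal sketch of ours: the 11 bp direct repeat is from the text, but the palindrome shown is only a placeholder, as its sequence is not given here):

```python
# Sketch (ours): keep tRNA-leu(UUG) BLAST hits containing both the
# ICEMh1-associated direct repeat (attB) and the conserved palindrome with
# 100% identity, then tally the genera they come from.
from collections import Counter

DIRECT_REPEAT = "GATTTTGAATC"   # attB-derived 11 bp repeat (from the text)
PALINDROME = "GGATCC"           # placeholder stand-in for the paper's palindrome

def receptive(hit_seq: str) -> bool:
    return DIRECT_REPEAT in hit_seq and PALINDROME in hit_seq

# `hits` would come from parsing BLAST output: (genus, aligned subject sequence).
hits = [
    ("Pasteurella", "ACGATTTTGAATCACGGATCCTT"),
    ("Mannheimia", "TTGATTTTGAATCAGGATCCAAA"),
    ("Escherichia", "GGGATTTTGAATCATTTTTTTTT"),  # repeat but no palindrome
]
genera = Counter(genus for genus, seq in hits if receptive(seq))
print(genera)   # Counter({'Pasteurella': 1, 'Mannheimia': 1})
```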
RESULTS
General Structure of the ICEMh1-like Variant ICEMh1 PM22 From a BRD-Associated P. multocida Isolate
ICEMh1 PM22 is a 77,786 bp element in P. multocida PM22 (accession # CP045724) bound by two copies of a tRNA for leucine (tRNA-leu), one of which is disrupted and potentially non-functional (on left, Figure 1A), whereas the second copy is full-length and likely functional (on right, Figure 1A). There is also a third disrupted copy of tRNA-leu upstream of the start codon of the full-length functional tRNA-leu (not shown). The ICEMh1 PM22 sequence may encode ∼94 proteins that have been annotated with putative functions or as conserved hypothetical proteins. Similar to the ICEMh1 prototype in M. haemolytica strain 42548 (Eidam et al., 2014), ICEPmu1 in P. multocida strain 36950 (Michael et al., 2011b), and other Pasteurellaceae ICE (Beker et al., 2018), ICEMh1 PM22 harbors multiple antimicrobial resistance genes/gene cassettes interspersed with numerous putative transposases in two discrete 'resistance regions'. The first (left) region, resistance region 1, encodes aminoglycoside (aph(3′)-Ia, aph(6)-Id, aph(3″)-Ib), sulfonamide (sul2), and macrolide (erm42) resistance genes, and the second, resistance region 2, encodes genes for tetracycline (tetH, tetR), aminoglycoside (ant(2″)-Ia, aadA25), and macrolide (mphE) resistance, as well as a potential extended-spectrum β-lactamase (blaOXA-2) previously shown to be phenotypically non-functional in transconjugants (Klima et al., 2014). This two-region structure appears to be widely present in other ICEMh1-like elements found in 41 Pasteurellaceae genome sequences, encompassing the livestock pathogens P. multocida, M. haemolytica, Histophilus somni, Bibersteinia trehalosi, and Glaesserella parasuis (Figure 1B). Also routinely present in ICEMh1-like sequences are putative tyrosine recombinases (integrases), sometimes annotated as XerC and XerD, members of which are thought to work in tandem by mediating site-specific recombination (i.e., insertion and excision of the ICE) using two recognition sites, which separately constitute the attL and attR sites in integrated ICE and are found on either side of a central attachment site involved in crossover (Michael et al., 2011b; Johnson and Grossman, 2015; Castillo et al., 2017). Alignment of the xerC and xerD sequences found in 41 ICEMh1-like elements indicates a high degree of similarity (alignment identity: xerC, 77.9%; xerD, 82.4%). XerC and XerD are likely part of the core or conserved backbone of ICEMh1-like elements, as are genes annotated as part of the conjugation machinery, such as the tra genes (Johnson and Grossman, 2015). Most of these genes were present in the central region of ICEMh1-like elements with >95% identity. We further examined the gene content in conserved and variable regions of the 41 ICEMh1-like elements by extracting gene annotations from Mauve-aligned sequences, defining 'conserved' as alignments found in >50% of ICEMh1-like elements with >80% sequence identity. 'Conserved' alignments harbored the majority of the conjugation machinery, such as tra and conjugation-associated proteins of unknown function, as well as DNA topoisomerases, integrases, and chromosome segregation factors potentially involved in ICE mobilization (Figure 1C).
'Variable' alignments contained more functionally diverse genes, including genes for antimicrobial and metal resistance, transposition, transcriptional regulation, potentially expanded metabolic functions (e.g., carbohydrate utilization and other respiratory genes), and toxin-antitoxin systems.
ICEMh1-like Elements Insert Into the tRNA-leu for the UUG Codon
It was previously shown that ICEMh1-like elements insert into tRNA-leu genes in P. multocida and M. haemolytica (Michael et al., 2011b; Eidam et al., 2014). The terminal ends of ICEMh1 (and the cognate attB genome attachment site) were identified as a direct repeat (5′-GATTCAAAATC-3′) (Eidam et al., 2014). Given that there are multiple copies of tRNA-leu in most bacterial genomes, we initially thought that any of them could be receptive to ICEMh1-like elements. To test this, we mapped ICEMh1 PM22 junctions in transconjugants using degenerate (random) PCR oligonucleotides (Salama et al., 2004) together with oligonucleotides specific for the terminal ends of ICEMh1 PM22 (Figure 2A). Following sequencing and alignment of the PCR-amplified junctions, ICEMh1 PM22 transconjugants in P. multocida CCUG 17976 and M. haemolytica ATCC 33396 were found to have identical insertion sites, which we identified as a single tRNA-leu in each strain, as shown mapped to the known genome sequences of P. multocida 36950 and M. haemolytica M42548 (Figure 2A). Furthermore, alignment of the left and right junctions in 41 ICEMh1-like-containing genomes indicated that those ICEs had exclusively inserted into tRNA-leu sequences in which both the direct repeat and an imperfect palindrome (5′-CGGTTCGAGTCCG-3′) were present (Figure 2B). To characterize this further, we examined all the tRNA-leu present in the 41 ICEMh1-like-containing genomes and found that these sequences could be categorized by multiple-alignment phylogeny into four types in both P. multocida and M. haemolytica, representing 4 of the 6 possible anticodons for leucine (Figure 2C). This was consistent with data from the GtRNAdb tRNA databases (Chan and Lowe, 2015), which indicate that these bacteria typically encode at least 4 tRNAs for the leucine codons CUA, CUC, UUA, and UUG. The other two tRNA-leu codons, CUG and CUU, are absent in currently known P. multocida and M. haemolytica genomes. Only tRNA-leu UUG was integrated with ICEMh1-like elements in the 41 genomes studied. The predicted structure of tRNA-leu UUG suggests that the ICE direct repeat constitutes the tRNA anticodon loop, and the palindrome is the basis of the T-loop (or TψC loop) ribosome recognition site (Figure 2D). Our analysis of ICEMh1 PM22 junctions also suggested that ICE insertion leaves one functional and one partial (disrupted) copy of tRNA-leu UUG (not shown).
In the 41 ICEMh1-like-containing genomes, at least 1 functional copy of each tRNA was present, although sometimes full-length tRNA-leu UUG was present twice (Figure 1E). In all cases where two functional copies of tRNA-leu UUG were present, the copies were located directly on either side of the ICE insertion. We also observed that one terminus (i.e., direct repeat) of the ICE was absent in some ICEMh1-like-containing genomes, potentially indicating ICEs incapable of mobilization (not shown).
Predicted Host Range of ICEMh1-like Elements
Given the conservation of the tRNA-leu UUG direct repeat and palindrome in the terminal ends of ICEMh1-like elements, we used these as criteria for in silico prediction of the potential host range. The full-length M. haemolytica tRNA-leu UUG sequence was used as the basis for a blastn nr database search optimized for short input sequences. Following retrieval of 9,604 BLAST alignments, tRNA-leu containing the exact direct repeat and palindrome sequences were filtered, and a taxonomic report showing bacterial genera with members possessing those attributes was constructed (Figure 2F). As expected, the phylum with the greatest number of genera harboring similar tRNA-leu was the Proteobacteria, specifically the γ-Proteobacteria, of which the Pasteurellales were most abundant (an order that includes P. multocida and M. haemolytica, and some human pathogens, including Haemophilus spp.). Enterobacterales were also frequently represented, and included many well-known food-borne and cattle-associated pathogens such as Escherichia coli and Salmonella spp. For E. coli, only some strains harbored the receptive tRNA-leu sequence, suggesting that only some members of any genus might be capable of transconjugation. Burkholderiales (β-Proteobacteria) also harbored similar tRNA-leu, suggesting the potential for ICEMh1-like horizontal gene transfer to Burkholderia spp. and Bordetella spp. Likewise, bacteria from the α-, δ-, and ε-Proteobacteria were also represented and may be receptive to ICEMh1, such that cattle-associated food-borne bacteria like Campylobacter could conjugate with ICE-bearing Pasteurellaceae. Some Gram-positive Firmicutes, specifically Clostridia, were also host to ICE-receptive tRNA-leu. Thus, if the tRNA-leu UUG sequence were the only requirement for mobilization and integration, ICEMh1 might be capable of crossing into these Gram-positive organisms. At present, it has only been experimentally confirmed that ICEMh1 PM22 can transfer into Rif R P. multocida and M. haemolytica (Klima et al., 2014), and E. coli strains K-12 and DH5α (not shown).
ICEMh1 PM22 Acquisition Confers a Fitness Cost in Laboratory Conditions
M. haemolytica ATCC 33396 transconjugants harboring ICEMh1 PM22 were noticeably slower-growing than the isogenic WT on agar media supplemented with selective concentrations of antimicrobials. To assess the effect of ICE acquisition on growth parameters, we performed OD 600 -based growth curves for Rif R P. multocida CCUG 17976 and M. haemolytica ATCC 33396, and their respective ICEMh1 PM22 transconjugants. The transconjugant of Rif R P. multocida CCUG 17976 exhibited both an extended lag phase and decreased growth rate compared to the ICE-minus strain ( Figure 3A). Likewise, the transconjugant of the Rif R M. haemolytica ATCC 33396 strain exhibited an extended lag phase, but did not have a significantly different growth rate (Figure 3B). To further explore the fitness costs of ICEMh1 PM22 integration, we competed the isogenic Rif R WT and transconjugants against E. coli DH5α expressing luciferase, and monitored light production to assess for E. coli fitness against all four strains ( Figure 3C). In all cases, the Pasteurellaceae inhibited E. coli light production. However, competitions with strains harboring ICEMh1 PM22 resulted in increased light production relative to the isogenic WT strains, also suggesting that ICEMh1 PM22 confers a fitness cost on the host.
ICEMh1 PM22 Was Retained in Transconjugants Following Repeated Laboratory Passage
Given the apparent cost of ICEMh1 PM22 carriage, we hypothesized that hosts might lose the ICE following extended passage in non-selective conditions (i.e., without antimicrobials). To test this, the ICEMh1 PM22 transconjugants for Rif R P. multocida CCUG 17976 and M. haemolytica ATCC 33396 were repeatedly passaged over 150 days in media without antimicrobials or supplemented with sub-inhibitory concentrations of oxytetracycline, spectinomycin, or tylosin. The chosen concentrations were sub-inhibitory (0.5 MIC) to the parental WT strains. Every 30 days, CFU from each repeated passage were enumerated on MH agar (total CFU) and MH agar supplemented with each antimicrobial at a selective concentration (i.e., at a concentration where nonsusceptibility was conferred by the ICEMh1 PM22 ). The presence of ICEMh1 PM22 was also confirmed in selected isolated colonies by PCR (not shown). This showed that repeated passage of ICEMh1 PM22 -carrying strains did not result in any measurable loss of the ICE (Figure 3D). In all cases there was no statistical difference between total and non-susceptible CFU counts in any repeated culture experiment. Thus, despite the fitness cost, ICEMh1 PM22 was retained in the absence of selection in these experiments.
The Presence of ICEMh1 PM22 Altered Collective Resistance to Sub-Inhibitory Antimicrobial Concentrations
Given that previous work had shown that ICEMh1 PM22 could transfer into E. coli, we attempted to monitor transconjugation dynamics using luciferase-expressing E. coli in co-culture with Rif R WT or ICEMh1 PM22 transconjugants under antimicrobial pressure. We thought that increased luciferase production might indicate conjugation events. Therefore, oxytetracycline (0.125 mg/L) and spectinomycin (32 mg/L) were selected as bacteriostatic antimicrobials and used at 0.5 MIC for both P. multocida and M. haemolytica Rif R WT. At these concentrations, oxytetracycline was sub-inhibitory to E. coli DH5α pAKlux2 (MIC: ∼1 mg/L), but spectinomycin was inhibitory (MIC: ∼16 mg/L). Co-cultures of the transconjugants or Rif R WT with E. coli DH5α pAKlux2 were monitored for OD 600 and luciferase production over 12 h. For oxytetracycline, increased luciferase production was observed for both P. multocida and M. haemolytica ICEMh1 PM22 co-cultures compared to the ICE-minus Rif R WT (Figure 3E, left). For spectinomycin, luciferase production was observed only in co-cultures of E. coli and the ICEMh1 PM22 P. multocida transconjugant (Figure 3E, right). However, increased luciferase production was not due to E. coli acquisition of ICEMh1 PM22 , as non-susceptible E. coli could not be recovered on media supplemented with either tetracycline or spectinomycin at concentrations known to be selective for ICEMh1 PM22 E. coli transconjugants (not shown). These findings suggest that the presence of ICEMh1 PM22 negatively affected the efficacy of these antimicrobials against E. coli in co-culture, without horizontal gene transfer.
Antimicrobials Used to Treat BRD Exhibit No Synergistic Interactions Against ICEMh1 PM22 -Bearing P. multocida and M. haemolytica
Lastly, because ICEMh1 PM22 harbors an extensive repertoire of antimicrobial resistance genes for drugs in frequent and common use for BRD, we investigated whether combination antimicrobial therapies were effective against P. multocida or M. haemolytica ICEMh1 PM22 transconjugants. Checkerboard antimicrobial synergy assays (Figure 3F) were performed to test for (I) synergism between combinations of two antimicrobials against which ICEMh1 PM22 confers non-susceptibility (i.e., oxytetracycline, spectinomycin, tilmicosin, tylosin, and sulfamethazine; Table 1), and (II) synergism between selected combinations of antimicrobials against which ICEMh1 PM22 confers non-susceptibility (i.e., oxytetracycline, spectinomycin, and tylosin) and those against which it does not (i.e., penicillin G, enrofloxacin, florfenicol, ceftiofur, and chlortetracycline). Following calculation and strict interpretation of the FICI scores for each combination, we found that no pairwise combination of oxytetracycline, spectinomycin, tilmicosin, tylosin, and sulfamethazine demonstrated synergism (Table 2). Likewise, no combination of a non-effective antimicrobial with any effective antimicrobial tested exhibited synergy (Table 3), and when present, susceptibility was due to the action of only the effective antimicrobial.
DISCUSSION
Many ICEs insert site-specifically into tRNAs and, in general, are not able to change their location within the same host cell (Burrus et al., 2002; Michael et al., 2011b). Previous research has shown that ICEPmu1 and other ICEMh1-like elements integrate into identical or near-identical tRNA-leu (Michael et al., 2011b; Eidam et al., 2014; Beker et al., 2018; Bhatt et al., 2018). Here, we confirmed these findings and provided further explanation for the tRNA-leu UUG integration specificity of ICEMh1-like elements. The ICEMh1-like direct repeat is highly conserved because it is the basis for the tRNA-leu UUG anticodon loop. Likewise, the attL site is conserved because it forms the tRNA's TψC loop. Unexpectedly, the attR site does not conserve the tRNA's D arm and loop. We speculate that this is the reason that integration seems to result in one functional and one disrupted tRNA-leu UUG: the terminus with attL and the attL-associated direct repeat maintains the complete tRNA, but the attR terminus does not complete the second tRNA fragment. Examination of several genomes with ICEMh1-like elements suggests this is the most frequent outcome of ICE insertion, but there are also examples where the ICE is flanked by complete tRNA-leu UUG, as well as ICE insertions where a terminus (i.e., the direct repeat) cannot be identified. We speculate that these represent ongoing recombination post-integration, and may be mechanisms through which ICEs become non-mobile or acquire new genes from the host chromosome. The highly conserved structural features of tRNAs evidently provide an excellent site for exploitation by ICEs and other mobile genetic elements. Many ICEs from different bacterial species integrate into various tRNAs (Burrus et al., 2002; Johnson and Grossman, 2015; Castillo et al., 2017). There are several possible advantages to this. Firstly, all known life uses tRNAs, and organisms encode multiple tRNAs for specific amino acids (i.e., the basis of redundancy in the genetic code). Although specific tRNAs may be absent or altered, they are still widely present in multiple bacterial species and provide for a broad host range. Secondly, tRNAs have redundancy, so ICE insertion into a tRNA need not be lethal even if the tRNA is disrupted. This would facilitate the transmission of the mobile genetic element without exerting a deleterious effect on the host, which might be expected if the insertion site were a highly conserved and non-redundant housekeeping gene. However, although single tRNA deletions may have no appreciable phenotypic effects under certain conditions, some tRNAs for a particular amino acid are known to contribute differently to cellular fitness in challenging conditions (Bloom-Ackermann et al., 2014). Thirdly, the structural features of tRNAs provide conserved recombination sites. Although relatively short sequences (<90 bp), tRNAs contain multiple stretches of complementary base pairs that provide tertiary structure, enabling interactions with amino acids, mRNA, and the ribosome. In the case of ICEMh1-like elements and tRNA-leu UUG, the 11 bp that constitute the ICE's direct repeat and the tRNA's anticodon loop were conserved with 100% identity in all ICEMh1-like-containing genomes. Exploiting the anticodon loop in this manner also likely enables a broader host range compared to more variable features of the tRNA.
Likewise, in the case of tRNA-leu UUG , we speculate that other conserved bp in the tRNA provide for the attL and attR recognition and/or recombination sites of the putative XerCD tyrosine recombinases. Although we did not explore that possibility here, this would be consistent with known XerCD mechanisms (Castillo et al., 2017).
It is difficult to accurately assess the potential host range of ICEMh1-like elements. Here, we identified bacterial genera potentially receptive to the ICE based solely on the presence of tRNA-leu, further refined by eliminating hits that did not contain the exact direct repeat and palindrome conserved at the attL terminus. As expected, related Pasteurellaceae and γ-Proteobacteria were most frequently represented, but genera from all Proteobacteria classes contained a receptive tRNA-leu. Most Gram-positive organisms may not be receptive to ICEMh1-like elements, but some Bacilli and Clostridia did harbor a receptive tRNA-leu. We attempted to transfer ICEMh1 PM22 to Clostridium difficile, but did not successfully recover transconjugants (not shown). The host range of other conjugative elements in nature has been shown to be narrower than that observed in in vitro experiments (Rice, 1998). There are also other factors that could prevent ICEMh1-like element transmission to bacteria, despite the presence of a receptive tRNA-leu. These may include phage (Lin et al., 2011), certain metabolites (Cabezón et al., 2017), and CRISPR-Cas systems (Zhang et al., 2013). Furthermore, ICEs transfer to recipients relatively infrequently, remaining silent and integrated, with excision and horizontal transmission occurring at low frequencies within the population (Delavat et al., 2017). The putative host range of ICEMh1-like elements identified here includes many pathogens capable of causing significant human and veterinary disease. The transfer of AMR determinants into such bacteria could significantly compromise antimicrobial therapy.
Despite a fitness cost in the absence of antimicrobial selection, we also found that ICEMh1 PM22 transconjugants did not lose the ICE after long-term passage. This is likely due to 'addiction' via the action of toxin-antitoxin genes common to most ICEs (Harms et al., 2018). For practical purposes, this implies that the acquisition of an AMR-carrying ICE is not reversible at the single-cell level. Given that ICEMh1-like elements may impose a fitness cost on the host cell in the absence of antimicrobials, elimination of antimicrobial usage could help non-carriers to outcompete ICE-carrying hosts. However, ICEMh1-like elements frequently contain metal-resistance and other genes conferring benefits under dynamic selective pressures in vivo, so elimination of antimicrobial pressure may not be effective at dampening ICE-carrying populations. We also used ICEMh1 PM22 transconjugants to test for synergistic combinations of antimicrobials, hoping to identify combinations that might be effective against AMR BRD pathogens, potentially reducing antimicrobial usage and restoring drug effectiveness. Ultimately, we did not detect any synergistic interactions between the drugs tested using these organisms. This was not unexpected, and it illustrates the relatively limited therapeutic options available to veterinarians for BRD treatment. The presence in these elements of multiple AMR determinants conferring resistance to most drug classes also underscores the dwindling effectiveness of antimicrobials. In conclusion, our results suggest that there is no easy solution to the emerging problem of AMR ICE in BRD pathogens using current antimicrobials and usage practices.
DATA AVAILABILITY STATEMENT
The PM22 sequence was deposited in GenBank under accession number CP045724.
AUTHOR CONTRIBUTIONS
AC and TM: conceptualization. AC and RZ: methodology, software, validation, formal analysis, and data curation.
"Biology"
] |
Pseudo-Labeling Optimization Based Ensemble Semi-Supervised Soft Sensor in the Process Industry
Nowadays, soft sensor techniques have become promising solutions for enabling real-time estimation of difficult-to-measure quality variables in industrial processes. However, labeled data are often scarce in many real-world applications, which poses a significant challenge when building accurate soft sensor models. Therefore, this paper proposes a novel semi-supervised soft sensor method, referred to as ensemble semi-supervised negative correlation learning extreme learning machine (EnSSNCLELM), for industrial processes with limited labeled data. First, an improved supervised regression algorithm called NCLELM is developed, by integrating the philosophy of negative correlation learning into extreme learning machine (ELM). Then, with NCLELM as the base learning technique, a multi-learner pseudo-labeling optimization approach is proposed, by converting the estimation of pseudo labels into an explicit optimization problem, in order to obtain high-confidence pseudo-labeled data. Furthermore, a set of diverse semi-supervised NCLELM models (SSNCLELM) are developed from different enlarged labeled sets, which are obtained by combining the labeled and pseudo-labeled training data. Finally, those SSNCLELM models whose prediction accuracies are not worse than those of their supervised counterparts are combined using a stacking strategy. The proposed method can not only exploit both labeled and unlabeled data, but also combine the merits of semi-supervised and ensemble learning paradigms, thereby providing superior predictions over traditional supervised and semi-supervised soft sensor methods. The effectiveness and superiority of the proposed method were demonstrated through two chemical applications.
Introduction
Modern industrial processes are equipped with a large number of measurement devices, in order to allow the implementation of advanced monitoring, optimization, and control of the production process. However, many crucial quality variables in industrial processes are difficult to measure online, due to the lack of reliable hardware sensors or the high investment in the purchase and maintenance of apparatuses. To tackle this problem, soft sensor technology, as a promising indirect measurement tool, has been proposed, to enable real-time estimation of difficult-to-measure process variables [1,2]. The basis of a soft sensor is to build a mathematical model describing the relationship between the difficult-to-measure target variable and the easy-to-measure secondary variables, and then perform online estimation for the query data, based on the built predictive model. Generally, soft sensors can be divided into two categories: first-principle and data-driven methods. The former type requires deep physical and chemical knowledge, which is often unavailable in many real-world applications. Alternatively, data-driven methods infer the quality variables directly from historical process data. Existing semi-supervised soft sensor approaches can be grouped as follows. (1) Generative methods: assumptions have been introduced to build generative soft sensor models, such as a mixture semi-supervised probabilistic PCR model [23], semi-supervised Gaussian mixture regression [24], semi-supervised Dirichlet process mixture of Gaussians [25], semi-supervised mixture of latent factor analysis models [26], and Student's-t mixture regression [27]. Overall, the key to building an accurate generative model lies in accurate model assumptions, which are often difficult to determine without sufficient reliable domain knowledge.
(2) Graph based methods: Methods of this type are based on manifold assumptions and require constructing a semi-labeled graph to ensure label smoothness over the graph. To this end, one needs to define a graph where the nodes denote labeled and unlabeled samples, and where the edges connect two nodes if their corresponding samples are highly similar. One typical example of such a method is the label propagation method [28], originally proposed for addressing classification problems. As for graph based SSL soft sensors, it is common practice to embed the graph based regularization into the cost function of traditional supervised regression techniques. For example, two semi-supervised soft sensors were developed by integrating the extreme learning machine (ELM) method and the graph Laplacian regularization into a unified modeling framework for industrial Mooney viscosity prediction [29,30]. Similarly, Yan et al. [31] developed a semi-supervised Gaussian process regression (GPR) for quality prediction, by using a semi-supervised covariance function, which was defined by introducing the manifold information into the traditional covariance function. Moreover, to enhance the model performance for handling complex process characteristics, as well as exploiting unlabeled process data, Yao and Ge [32] proposed a semi-supervised deep learning model for soft sensor development. First, it implements unsupervised feature extraction through an autoencoder with a deep network structure. Then, ELM is utilized for regression, by introducing manifold regularization. In addition, Yan et al. [33] proposed a semi-supervised deep neural regression network with manifold embedding for soft sensor modeling.
(3) Representation learning based methods: A general strategy for these methods is to use unlabeled data to assist in extracting abstract latent features of the input data. The most common techniques for this purpose are deep learning methods [34], such as convolutional neural networks, deep belief networks (DBN), long short-term memory neural networks, and a large variety of autoencoders. Besides its strong representation ability, deep learning is inherently semi-supervised and, thus, can effectively exploit all available process data. As an early attempt, Shang et al. [35] employed a DBN to build soft sensors for estimating the heavy diesel 95% cut-point of a crude distillation unit (CDU). However, traditional representation techniques are mainly implemented in an unsupervised manner, where the output variable information is ignored. To address this issue, several research works have focused on exploring semi-supervised representation learning techniques. For instance, Yan et al. [36] proposed a deep relevant representation learning approach based on a stacked autoencoder, which conducts a mutual information analysis between the representations and the output variable in each layer. Similar research has also been reported in references [37-39].
(4) Self-labeled methods [40]: The core of such methods is extending the labeled training set by adding high-confidence pseudo-labeled data. In such a modeling framework, one or more predictive models are first trained with labeled data only, and then refined using the extended labeled set through iterative learning. Two representative examples are self-training [41] and co-training [42], based on which some variants have also been proposed, such as COREG [43], Tri-training [44], Multi-train [45], and CoForest [46].
As an instantiation of self-training, a semi-supervised support vector regression model was proposed and verified using 30 regression datasets and an industrial semiconductor manufacturing dataset [47]. In this method, the label distribution of the unlabeled data is estimated with two probabilistic local reconstruction models, thus providing the labeling confidence. In contrast, based on the co-training paradigm, Bao et al. [48] proposed a co-training partial least squares (PLS) method for semi-supervised soft sensor development, by splitting the total process variables into two different parts, serving as two views. Instead of using single-output regression techniques, four semi-supervised multiple-output learning soft sensor models [49] were developed. In addition, by applying a spatial view, a temporal view, and a transformed view together, a multi-view transfer semi-supervised regression was developed, combining transfer learning and co-training for air quality prediction [50]. Another example is a co-training style semi-supervised artificial neural network model for thermal conductivity prediction, based on a disagreement-based semi-supervised learning principle [51]. The core of this method is constructing two artificial neural network learners with different architectures, to label the unlabeled samples. Despite the simplicity and flexibility of implementation, the success of self-labeled SSL soft sensors heavily depends on reliable confidence estimation of the pseudo-labeled data.
According to how the unlabeled data are utilized, the above-mentioned semi-supervised modeling methods can be categorized into two groups, i.e., regularization embedding and pseudo labeling based approaches. The former methods aim to improve the model training, while the latter seek to enhance the model generalization capability by enlarging the labeled training data with high-confidence pseudo-labeled data. Compared to the regularization based SSL methods, pseudo labeling based ones have the merits of easy implementation and flexible wrapping around any base learning technique. Thus, in this work, we focus on pseudo-labeling semi-supervised soft sensor development. However, traditional pseudo-labeling SSL methods often suffer from several drawbacks. One particular problem is the difficulty in defining a confidence evaluation criterion. The most commonly used strategy is to evaluate the improvement rate of the prediction performance after introducing the pseudo-labeled data [43]. Unfortunately, such a criterion cannot effectively characterize the complex hypothesis behind the labeled and unlabeled data. Another problem is that traditional pseudo-labeling strategies, such as self-training and co-training, are prone to error propagation and accumulation during iterative learning. Thus, it is appealing to investigate a new pseudo-labeling paradigm, to achieve reliable pseudo-label estimation.
In addition to augmenting the labeled training data, ensemble learning also plays a crucial role in enhancing the prediction performance of soft sensors [52-55]. Usually, ensemble methods can produce a strong ensemble model that is significantly more accurate than a single learner [56]. Although semi-supervised and ensemble learning are two distinct learning paradigms, they can complement each other when combined efficiently. On the one hand, the introduction of ensemble learning into SSL avoids the difficulty of selecting model parameters and overcomes the drawbacks of a single predictive model, thus reducing the modeling uncertainty and improving the prediction reliability. On the other hand, the utilization of unlabeled data is helpful for improving the accuracy and diversity of the base models, which is of great importance for constructing high-performance ensemble models. Therefore, the effective combination of semi-supervised and ensemble learning has attracted much attention from researchers on soft sensor development. For example, some research attempted to combine multiple homogeneous or heterogeneous base learners to construct semi-supervised ensemble models within self-labeled learning frameworks (e.g., self-training and co-training) [50,51,57]. In addition, some scholars used unlabeled data to enhance the diversity of base models, thus improving the ensemble performance [29,58,59]. Moreover, Shao and Tian [60] proposed a soft sensor method based on a semi-supervised selective ensemble learning strategy. This method uses abundant unlabeled samples to help achieve a reliable process state partition. These examples show that the combination of semi-supervised and ensemble learning can be expected to improve the performance of a soft sensor model. In light of the above-mentioned problems, in this work, a new semi-supervised soft sensor modeling method, namely EnSSNCLELM, was developed for high-quality prediction in industrial processes where labeled data are limited but unlabeled data are abundant. The feasibility and superiority of the proposed method were validated through application to a simulated fed-batch penicillin fermentation process and an industrial chlortetracycline fermentation process. Overall, the main contributions of this paper are threefold: (1) An improved supervised regression model, NCLELM, was developed, and it serves as the base learning technique for the proposed EnSSNCLELM modeling framework. Despite its fast training speed, ELM is prone to delivering unstable predictions, due to the random assignment of input weights and biases. By introducing the ensemble strategy of negative correlation learning into ELM, the NCLELM algorithm allows explicitly increasing the diversity among the base ELM models and, thus, enhancing the prediction accuracy and reliability. (2) A multi-learner pseudo-labeling optimization (MLPLO) approach is proposed to achieve pseudo-label estimation. Differently from traditional self-labeling techniques, such as self-training and co-training, the MLPLO method builds an explicit optimization problem with the unknown labels of the unlabeled data as the decision variables. Meanwhile, by exploring the inherent connections between labeled and unlabeled data, the individual and collaborative prediction performance of multiple learners are defined and integrated as the optimization objective.
Then, an evolutionary optimization approach is adopted to solve the formulated pseudo-labeling optimization problem (PLOP), so as to obtain high-confidence pseudo-labeled samples for expanding the labeled set. A significant advantage of MLPLO is its strong capability for avoiding the error propagation and accumulation found with commonly used iterative learning. (3) By effectively combining a MLPLO strategy with ensemble modeling, the proposed EnSSNCLELM soft sensor method allows achieving the complementary advantages of semi-supervised and ensemble learning. On the one hand, semi-supervised learning is helpful for enhancing the accuracy and diversity of ensemble members by providing different high-confidence pseudo-labeled sets. On the other hand, combining multiple semi-supervised models using ensemble methods makes it possible to fully utilize the information of unlabeled data and reduce the modeling uncertainty caused by sub-optimal parameter settings and data selection.
The rest of the paper is organized as follows. Section 2 provides a brief introduction of ELM modeling and negative correlation learning (NCL). The proposed NCLELM and EnSSNCLELM soft sensor methods are elaborated in Section 3. Section 4 reports two case studies, to demonstrate the feasibility and efficiency of the proposed approach. Finally, conclusions are drawn in Section 5.
Extreme Learning Machine
ELM [61] is a computationally efficient learning method for training single-hidden-layer feedforward networks (SLFNs). Unlike traditional gradient-based iterative learning, the input weights and hidden node biases of an ELM model are randomly assigned, and the output weights of the SLFN can then be obtained analytically, which enables extremely fast training of the ELM.
Given a training set D = {X, y}, with X = {x_i}_{i=1}^{L} and y = {y_i}_{i=1}^{L} representing the input and output data, respectively, where L is the number of training samples, x_i ∈ R^d, and y_i ∈ R. A standard ELM model with N hidden nodes and activation function g(·) can be mathematically modeled as

f(x_j) = Σ_{i=1}^{N} β_i g(w_i · x_j + b_i),   j = 1, . . . , L,

where w_i = [w_{i,1}, w_{i,2}, . . . , w_{i,d}]^T is the input weight vector that connects the ith hidden node and the input nodes, β_i denotes the output weight that connects the ith hidden node and the output node, and b_i represents the bias of the ith hidden node. While numerous activation functions have been defined, the simple sigmoidal function g(x) = 1/(1 + exp(−x)) is adopted in this study.
To train an ELM model, the fitting errors on the training data are expected to be zero. In other words, the parameters β_i, w_i, and b_i must satisfy the linear system

Hβ = y,

where H is the L × N hidden-layer output matrix with entries H_{j,i} = g(w_i · x_j + b_i), β = [β_1, . . . , β_N]^T, and y = [y_1, . . . , y_L]^T. Since the input weights and the hidden-layer biases are randomly generated, training an ELM model is equivalent to finding the least-squares solution β̂ of the linear system Hβ = y:

β̂ = H†y,

where H† is the Moore-Penrose generalized inverse of the hidden-layer output matrix H.
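As a concrete illustration of the training procedure above, the following is a minimal ELM regression sketch in Python: random input weights and biases, sigmoid activation, and output weights from the Moore-Penrose pseudoinverse. The toy data, layer size, and helper names are illustrative choices, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def elm_fit(X, y, n_hidden, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # input weights w_i
    b = rng.standard_normal(n_hidden)                # hidden biases b_i
    H = sigmoid(X @ W + b)                           # hidden-layer output matrix H
    beta = np.linalg.pinv(H) @ y                     # beta_hat = H^dagger y
    return W, b, beta

def elm_predict(X, W, b, beta):
    return sigmoid(X @ W + b) @ beta

X = np.random.rand(100, 3)
y = X.sum(axis=1) + 0.01 * np.random.randn(100)      # toy regression target
W, b, beta = elm_fit(X, y, n_hidden=30)
print(np.mean((elm_predict(X, W, b, beta) - y) ** 2))  # small training MSE
```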
Negative Correlation Learning
NCL [62] is a learning approach originally proposed for neural network ensembles. It introduces a correlation penalty term into the cost function of each individual network in the ensemble, thus allowing the diversity among the base models to be explicitly maximized. In this way, NCL can train the individual networks interactively on the same training set, so that their training errors tend to be negatively correlated. Given a training set {x_i, y_i}_{i=1}^{L}, NCL combines N neural networks f_n(x) to form an ensemble model:

f_ens(x_i) = (1/N) Σ_{n=1}^{N} f_n(x_i).

To train network f_n, the cost function e_n for network n is defined by

e_n = Σ_{i=1}^{L} (f_n(x_i) − y_i)² + λ p_n,   (7)

where λ ≥ 0 is a weighting parameter on the penalty term p_n:

p_n = Σ_{i=1}^{L} (f_n(x_i) − f_ens(x_i)) Σ_{m≠n} (f_m(x_i) − f_ens(x_i)),   (8)

where the first term in e_n is the empirical training error of the network, while the second is a correlation penalty term.
During the training process, as shown in Equations (7) and (8), all the individual networks interact with each other through their penalty terms in the cost function. Each network aims to minimize the difference between f n (x i ) and y i , whereas the difference between f ens (x i ) and f n (x i ) is maximized. In other words, NCL considers training errors from all other networks while training a network. The parameter λ controls the trade-off between the training error term and the penalty term. In the extreme case of λ = 0, all individual networks are trained independently. With the increase of λ, more and more attention is paid to minimizing the correlation-based penalty.
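A small sketch of the NCL cost in Equations (7) and (8) may help make the interaction concrete; the per-model prediction matrix and the parameter values below are hypothetical.

```python
import numpy as np

def ncl_cost(f, y, n, lam):
    """Cost e_n for member n, given all member predictions f (shape N x L)."""
    f_ens = f.mean(axis=0)                       # simple-average ensemble
    err = np.sum((f[n] - y) ** 2)                # empirical training error
    # sum over m != n of (f_m - f_ens), computed without an explicit loop
    others = f.sum(axis=0) - f[n] - (f.shape[0] - 1) * f_ens
    p_n = np.sum((f[n] - f_ens) * others)        # correlation penalty p_n
    return err + lam * p_n

f = np.random.rand(5, 50)                        # hypothetical member outputs
y = np.random.rand(50)
print(ncl_cost(f, y, n=0, lam=0.5))
```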
Proposed NCLELM and EnSSNCLELM Soft Sensor Methods
To handle the issue of label scarcity for quality prediction in the industrial process, this work aims to leverage both labeled and unlabeled data, to improve the inferential performance of soft sensors, by combining semi-supervised and ensemble learning. For this purpose, we propose NCLELM and EnSSNCLELM soft sensor methods, which are described in detail in the following sections.
NCLELM
In recent years, ELM has gained growing popularity in soft sensor applications, due to its fast learning speed and good generalization performance [63-66]. However, ELM often produces unstable predictions, due to the uncertainties caused by the random assignments of input weights and biases in the learning process. A popular approach to this problem is to utilize an ensemble strategy to improve the stability and accuracy of the ELM model.
Since the NCL learning approach has been proven effective for enhancing ensemble performance by explicitly maximizing the diversity among base models [67-69], we derived the NCLELM algorithm by introducing NCL into ELM learning. NCLELM aims to encourage diversity among the component ELM models, by reducing the correlation among their outputs, and thus attaining a good overall ensemble accuracy. Thanks to the fast training speed of ELM, NCLELM can also be trained very efficiently, even though multiple ELM models are involved in the learning. Subsequently, we present the proposed NCLELM method in detail.
Given a training set D = {x_i, y_i}_{i=1}^{L} and an ensemble of N individual ELM models, the error function of the nth individual is expressed as follows:

e_n = ‖H_n β_n − y‖² + λ p_n,   (9)

where H_n and β_n denote the hidden-layer output matrix and output-layer weight vector of the nth individual ELM model, respectively, y = [y_1, y_2, . . . , y_L]^T is a column vector, λ is a trade-off parameter on the correlation penalty term p_n (defined as in Equation (8), with f_n = H_n β_n), and f_ens denotes the simple average of the outputs of the N ELM individuals:

f_ens = (1/N) Σ_{n=1}^{N} H_n β_n.   (10)

Then, the solution to the quadratic optimization problem in Equation (9) can be obtained by setting ∂e_n/∂β_n = 0 (Equation (11)). By substituting Equation (10) into Equation (11) and rearranging, each stationarity condition becomes linear in the output weights of all N individuals (Equations (12) and (13)). Applying Equation (13) to all individual errors e_n and all output weights β_n, and stacking the N conditions with β_ens = [β_1^T, . . . , β_N^T]^T, the overall optimization of all individual ELM models reduces to a single linear system (Equations (14)-(16)). Finally, the solution β̂_ens can be obtained analytically (Equation (17)).
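The coupled solve described above can be illustrated numerically. The sketch below assumes the standard NCL penalty applied with f_n = H_n β_n, under which p_n reduces to −‖H_n β_n − f_ens‖² and each stationarity condition is linear in the output weights of all members; this is one plausible reading of Equations (9)-(17), not the paper's exact matrix algebra, and all names and settings are illustrative.

```python
import numpy as np

def nclelm_fit(X, y, n_models=4, n_hidden=20, lam=0.3, seed=0):
    """Illustrative NCLELM solve: stack the stationarity conditions
    (1-g) Hn'Hn bn + (g/N) sum_m Hn'Hm bm = Hn'y into one linear system,
    where g = lam * (1 - 1/N)."""
    rng = np.random.default_rng(seed)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    params, Hs = [], []
    for _ in range(n_models):
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        params.append((W, b))
        Hs.append(sig(X @ W + b))          # hidden-layer output matrix H_n
    g = lam * (1.0 - 1.0 / n_models)       # effective penalty weight
    D = n_models * n_hidden
    A, c = np.zeros((D, D)), np.zeros(D)
    for n in range(n_models):
        r = slice(n * n_hidden, (n + 1) * n_hidden)
        c[r] = Hs[n].T @ y
        for m in range(n_models):
            s = slice(m * n_hidden, (m + 1) * n_hidden)
            A[r, s] = (g / n_models) * (Hs[n].T @ Hs[m])
            if m == n:
                A[r, s] += (1.0 - g) * (Hs[n].T @ Hs[n])
    beta = np.linalg.lstsq(A, c, rcond=None)[0].reshape(n_models, n_hidden)
    return params, beta, sig

def nclelm_predict(X, params, beta, sig):
    preds = [sig(X @ W + b) @ beta[n] for n, (W, b) in enumerate(params)]
    return np.mean(preds, axis=0)          # simple-average ensemble output

X = np.random.rand(200, 4); y = np.sin(X.sum(axis=1))
params, beta, sig = nclelm_fit(X, y)
print(np.mean((nclelm_predict(X, params, beta, sig) - y) ** 2))
```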
EnSSNCLELM
The proposed EnSSNCLELM method can be split into three steps: (i) obtaining high-confidence pseudo-labeled data, (ii) building diverse semi-supervised models, and (iii) combining the diverse semi-supervised models.
Formulating the Pseudo-Labeling Optimization Problem
Traditionally, the pseudo labels for the unlabeled data are estimated through self-labeling techniques, such as self-training and co-training. However, such strategies are essentially an implicit optimization, and they often suffer from error accumulation and propagation. In addition, it is difficult to evaluate the confidence of pseudo labels effectively. Thus, in our recent study [70], a single-learner pseudo-labeling optimization (SLPLO) scheme was proposed, to achieve reliable estimation of pseudo labels in an explicit way. Nevertheless, one particular drawback of this approach is that relying on one single learner to evaluate the goodness of pseudo labels may carry a high risk. By contrast, multiple learners can provide diverse views and insights, which are preferable for obtaining a reliable confidence evaluation.
Therefore, in this work, we propose a multi-learner pseudo-labeling optimization (MLPLO) approach, aiming to produce more reliable pseudo labels than the SLPLO scheme. The task of MLPLO is to explicitly approximate the unknown labels for given unlabeled data, by relying on a group of diverse learners to explore the available information behind the labeled and unlabeled data.
As preliminary steps, the decision variables, optimization objectives, and constraints should be determined in advance, thus formulating the pseudo-labeling optimization problem. Given a labeled training set D_l = {X_l, y_l}, where X_l = {x_i^l}_{i=1}^{L} and y_l = {y_i^l}_{i=1}^{L} are the input and output data, respectively, and L represents the number of labeled samples. Meanwhile, denote X_u = {x_i^u}_{i=1}^{U} as the input data of the unlabeled dataset D_u, where U is the number of unlabeled samples. In addition, let y_u = {y_i^u}_{i=1}^{U} be the pseudo labels of X_u and D_pl = {X_u, y_u} be the pseudo-labeled set, where y_u are actually unknown in practical applications and remain to be estimated. Hence, y_u is taken as the decision vector of the proposed MLPLO problem (MLPLOP).
Subsequently, the objective function should be defined, to evaluate the quality of the candidate solutions. For the MLPLO approach, it is necessary to build multiple NCLELM models with high diversity, when considering NCLELM as the base learning technique. This can be achieved by repeating the random generation of input weights and biases, thereby producing diverse NCLELM models H = {NCLELM_1, ···, NCLELM_M}. It is worth noting that, during the MLPLO process, the initial configurations of input weights and biases remain unchanged, though the output weights are updated as the training set changes.
Within the multi-learner PLO scenario, the key to defining an appropriate optimization objective lies in analyzing the individual and collaborative prediction capability of the base models on the labeled and pseudo-labeled data. Specifically, four evaluation criteria are defined, including the individual accuracy using the pseudo-labeled data, the individual accuracy improvement after including the pseudo-labeled data, the smoothness of the labeled and pseudo-labeled data, and the ensemble accuracy of multiple learners using the pseudo-labeled data.
Suppose there is a pseudo-labeled set D_pl = {X_u, y_u} that needs to be optimized. We randomly divide D_pl into M equal-sized training subsets {D_pl_1, ···, D_pl_M} and then feed these to the built NCLELM models, respectively, where each subset D_pl_i contains P samples. The goal of such a data partition is to promote the diversity of the NCLELM models. Next, the defined evaluation criteria are discussed, as follows.
(1) Individual accuracy using the pseudo-labeled data. It is well known that successful data-driven modeling greatly relies on the assumption that the modeling data are independent and identically distributed. That is to say, good prediction performance on unseen samples can be attained only when the test and training data come from the same distribution. This assumption usually applies when developing data-based models for industrial processes. Thus, in the context of semi-supervised modeling, we assume that the labeled and unlabeled data are drawn from the same distribution. Intuitively, this implies that a NCLELM model trained with the pseudo-labeled data can also provide accurate predictions on the labeled set, if the pseudo labels are estimated well enough. Specifically, let H_pl = {h_pl_1, h_pl_2, ···, h_pl_M} denote the diverse NCLELM models learned from the pseudo-labeled subsets D_pl_1, D_pl_2, ···, D_pl_M, respectively. It is obvious that we expect the performance of h_pl_i on D_l to be good if the acquired D_pl_i has high quality. Furthermore, by considering all individual accuracies simultaneously, the following overall accuracy objective is minimized:

J_1 = Σ_{i=1}^{M} Σ_{j=1}^{L} ( ŷ_{i,j}^{pl} − y_j^{l} )²,   (18)

where ŷ_{i,j}^{pl} represents the predicted label of the jth labeled sample using h_pl_i, and y_j^{l} is the jth actual label.
(2) Individual accuracy improvement after including the pseudo-labeled data. In many self-labeling semi-supervised learning algorithms [48-50], the pseudo labels are usually estimated from already built predictive models. Then, the confidence of these pseudo-labeled data is evaluated according to the prediction accuracy enhancement of the model after adding the target pseudo-labeled data to the original training set. The larger the performance improvement, the higher the confidence of the pseudo-labeled data. Similarly, we also employ this idea to evaluate the confidence of the optimized pseudo-labeled subsets. Let H_l+pl = {h_l+pl_1, ···, h_l+pl_M} denote the NCLELM models retrained from the enlarged sets D_l ∪ D_pl_1, ···, D_l ∪ D_pl_M, respectively. Then, it is desirable to minimize the prediction errors of H_l+pl on the labeled training set:

J_2 = Σ_{i=1}^{M} Σ_{j=1}^{L} ( ŷ_{i,j}^{l+pl} − y_j^{l} )².   (19)

(3) Smoothness of the labeled and pseudo-labeled data. The criteria in Equations (18) and (19) focus on evaluating the characteristics of the pseudo-labeled subsets, but do not consider the overall confidence of all pseudo labels. According to the smoothness assumption [16,17], similar inputs will lead to similar outputs. Obviously, this assumption should also hold true for the mixed labeled and pseudo-labeled data, if we can obtain high-confidence pseudo labels. A popular approach using this idea is to introduce a regularization term into the cost function in semi-supervised learning, e.g., semi-supervised ELM [29] and semi-supervised deep learning [33]. In this way, the information behind the unlabeled data can be utilized in model training, to avoid overfitting. Thus, in our proposed MLPLO approach, we introduce Laplace regularization to ensure the smoothness of the labeled and pseudo-labeled data during the optimization process. After mixing the labeled samples D_l and pseudo-labeled samples D_pl, a graph based regularization term, called the smoothness objective, is defined and expected to be minimized (a numerical sketch of this term is given after this list):

J_3 = y^T L y,   (20)

where y denotes the outputs of the labeled and pseudo-labeled sets, i.e., y = [y_1, y_2, ···, y_L, y_{u,1}, y_{u,2}, ···, y_{u,U}]^T, and L represents a graph Laplace matrix of dimension (L + U) × (L + U), which can be calculated as L = D − W. D is a diagonal matrix with elements determined as follows:

d_ii = Σ_j w_ij,   (21)

where w_ij ∈ W represents the connection weight between two nodes x_i and x_j in the graph model. Usually, w_ij is calculated with a Gaussian kernel:

w_ij = exp( −‖x_i − x_j‖² / (2σ²) ).   (22)

(4) Ensemble accuracy of multiple learners using the pseudo-labeled data. In addition to the smoothness objective, a collaborative confidence evaluation can also be achieved through the ensemble prediction performance of the NCLELM models H_pl. Thus, we expect to minimize the ensemble prediction errors of H_pl on the labeled training set:

J_4 = Σ_{j=1}^{L} ( ŷ_j^{ens} − y_j^{l} )²,   (23)

where ŷ_j^{ens} is the ensemble prediction output of the jth labeled sample. By ensuring the ensemble prediction accuracy, the confidence of the pseudo labels can be further improved. It should be noted that, although many strategies can be used for this combination, the simple averaging rule was chosen, because complex schemes are likely to cause overfitting [71].
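As promised above, a numerical sketch of the smoothness term in Equation (20), with a Gaussian-kernel affinity matrix; the bandwidth σ and the toy data are assumptions.

```python
import numpy as np

def smoothness_objective(X, y, sigma=1.0):
    """J_3 = y' L y over labeled plus pseudo-labeled outputs y."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq / (2.0 * sigma ** 2))       # w_ij between nodes x_i, x_j
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W             # graph Laplacian L = D - W
    return float(y @ L @ y)

X = np.random.rand(30, 3)                      # mixed labeled + pseudo-labeled inputs
y = X.sum(axis=1)                              # smooth toy outputs -> small J_3
print(smoothness_objective(X, y))
```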
Combining the above four optimization objectives, we obtain the synthesized objective function of the MLPLO approach for obtaining high-confidence pseudo labels:

min_{y_u} J(y_u) = J_1 + γ_1 J_2 + γ_2 J_3 + γ_3 J_4, subject to y_min ≤ y_u ≤ y_max,   (24)

where y_u = [y_u_{1,1}, y_u_{1,2}, ···, y_u_{1,P}, ···, y_u_{M,1}, y_u_{M,2}, ···, y_u_{M,P}] denotes the decision vector, and [y_min, y_max] are the lower and upper bounds, with each pair of elements for one decision variable denoted as [y_min,i,j, y_max,i,j], where i = 1, 2, ···, M and j = 1, 2, ···, P. Note that the upper and lower bound settings can severely affect the efficiency of the MLPLO approach, so they should be determined carefully. To this end, the prediction uncertainty of the pseudo labels is first estimated through a probabilistic modeling technique and then utilized to aid in determining the boundaries of the decision variables. Specifically, in this work, a GPR modeling technique [72] is applied to provide confidence intervals for the pseudo labels, which involves three steps: first, building a GPR model from the labeled data D_l; then, obtaining the prediction variances of the unlabeled data using the GPR model; and, finally, determining the search ranges based on the confidence intervals.
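The GPR-based bound-setting step might look as follows; the kernel choice and the interval width factor k are assumptions, and scikit-learn is used only for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def pseudo_label_bounds(X_l, y_l, X_u, k=2.0):
    """Fit GPR on labeled data; use mean +/- k std as per-variable bounds."""
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
    gpr.fit(X_l, y_l)
    mu, sd = gpr.predict(X_u, return_std=True)
    return mu - k * sd, mu + k * sd            # [y_min, y_max] per pseudo label

X_l = np.random.rand(40, 3); y_l = X_l.sum(axis=1)   # toy labeled data
X_u = np.random.rand(10, 3)                          # toy unlabeled inputs
lo, hi = pseudo_label_bounds(X_l, y_l, X_u)
print(lo[:3], hi[:3])
```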
Solving the Pseudo-Labeling Optimization Problem
For the formulated MLPLO problem in Equation (24), it can be seen that NCLELM training and prediction are performed repeatedly during the calculation of the objective function, which means that the objective function fails to satisfy continuity, differentiability, convexity, etc. Obviously, classical optimization techniques are not suitable for solving such an optimization problem. Fortunately, evolutionary algorithms, inspired by biological phenomena, have achieved great success in both machine learning and engineering applications for many tasks, such as feature selection, model selection, and optimal scheduling [73-77]. Unlike classical optimization methods, evolutionary approaches conduct a parallel search, and they have the advantage of adapting to complex problems, where differentiability and convexity are not required. Despite the availability of numerous evolutionary algorithms, for the sake of simplicity, one of the most well-known and commonly used approaches, the genetic algorithm (GA), is chosen as an example to illustrate how to solve the MLPLOP through an evolutionary approach.
The strategy of producing high-confidence pseudo labels using GA optimization is illustrated in Figure 1. The details of individual representation, fitness evaluation, and evolutionary operations are presented as follows: (1) Individual representation. In most cases, the quality variables in an industrial process are real numbers; thus, the decision variables y_u = [y_u_{1,1}, y_u_{1,2}, ···, y_u_{1,P}, ···, y_u_{M,1}, y_u_{M,2}, ···, y_u_{M,P}] are encoded as chromosomes by real-number coding, as illustrated in Figure 2. In addition, an initial population with N_pop individuals is generated randomly within the range [y_min, y_max].
(2) Fitness evaluation. The goal of this step is to determine the quality of candidate solutions in the populations. In this case, we compute the value of the objective function defined in Equation (24) and use its reciprocal as the fitness value to evaluate the goodness of the chromosomes.
(3) Evolutionary operations. If the stopping condition is not satisfied, the GA needs to generate an offspring population by executing the evolutionary operations, i.e., selection, crossover, and mutation. The selection operator finds the good solutions with the largest fitness values in a population. Crossover creates new individuals by combining the genes of one individual with those of another. In mutation, the genes of individuals undergo small random changes to create mutation children. Such an operation provides genetic diversity and allows enlarging the search space. A minimal sketch of this loop is given after this list.
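For completeness, a minimal real-coded GA loop matching the three steps above; the toy objective stands in for Equation (24), and the population size, operator rates, and operator choices are illustrative assumptions.

```python
import numpy as np

def ga_optimize(J, lo, hi, pop=40, gens=100, pc=0.8, pm=0.1, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.uniform(lo, hi, size=(pop, lo.size))        # initial population
    for _ in range(gens):
        fit = np.array([1.0 / (1e-12 + J(ind)) for ind in P])  # reciprocal fitness
        parents = P[rng.choice(pop, size=pop, p=fit / fit.sum())]  # selection
        for i in range(0, pop - 1, 2):                  # arithmetic crossover
            if rng.random() < pc:
                a = rng.random()
                parents[i], parents[i + 1] = (a * parents[i] + (1 - a) * parents[i + 1],
                                              a * parents[i + 1] + (1 - a) * parents[i])
        mask = rng.random(parents.shape) < pm           # uniform mutation
        parents[mask] = rng.uniform(np.broadcast_to(lo, parents.shape)[mask],
                                    np.broadcast_to(hi, parents.shape)[mask])
        P = parents
    return P[np.argmax([1.0 / (1e-12 + J(ind)) for ind in P])]

J = lambda v: np.sum((v - 0.3) ** 2)                    # toy objective, minimum at 0.3
best = ga_optimize(J, lo=np.zeros(5), hi=np.ones(5))
print(best)
```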
After performing the GA based MLPLO optimization, the optimal solution encoding the best pseudo labels is obtained, i.e., y_u* = [y*_{1,1}, y*_{1,2}, ···, y*_{1,P}, ···, y*_{M,1}, y*_{M,2}, ···, y*_{M,P}]. Then, by combining y_u* with its corresponding X_u, we get the optimized pseudo-labeled set D_pl = {X_u, y_u*}.
Building Diverse SSNCLELM Base Models
In order to obtain a good ensemble model, we need to construct accurate and diverse base models. Once the MLPLO process is completed, a group of pseudo-labeled samples can be obtained and further combined with the labeled set to build a set of SSNCLELM models. It is fairly easy to see that the performance of a SSNCLELM model is highly dependent on the quality of the optimized pseudo-labeled samples. Hence, to ensure the reliability of the MLPLO process, the following two problems should be handled properly: (1) Selection of the unlabeled samples. Though a large number of unlabeled data are available for SSL learning in practical industrial processes, the number of unlabeled samples used in the MLPLO approach cannot be too large. As indicated in Figure 2, the dimension of the decision variables is equal to the size of the selected unlabeled data; hence, the inclusion of too many unlabeled samples would inevitably turn the MLPLOP into a large-scale optimization problem [78]. In this case, it is very difficult to obtain good search performance, efficiency, and effectiveness. In addition, introducing too many pseudo-labeled samples into the SSNCLELM model construction would also weaken the influence of the labeled samples, thus leading to high modeling risk. Therefore, it is more realistic to generate diverse small-scale pseudo-labeled sets and then construct diverse SSNCLELM base models. (2) Setting of the trade-off parameters {γ_1, γ_2, γ_3}. Besides the unlabeled data selection, another important factor affecting the optimization performance of the MLPLO approach is determining an appropriate combination of trade-off parameters. In practice, it is a difficult task to find a globally optimal combination of the three parameters, and there is usually more than one combination that can meet the requirements. Considering this, a natural method is to apply diverse settings of the trade-off parameters for the MLPLO process, thus avoiding the difficulties in parameter selection, as well as generating diverse pseudo-labeled samples. This is very helpful for enhancing the diversity of the SSNCLELM base models.
In light of the above problems, we aim to generate diverse SSNCLELM models through a multi-model perturbation mechanism, which combines random assignment of NCLELM model parameters, random selection of small-scale unlabeled samples for MLPLO optimization, and diverse settings of the trade-off parameters {γ1, γ2, γ3}. Let Φk = {{NCLELM init,i} M i=1 , D u k , {γ1, γ2, γ3}k} denote the specialized setting for the kth run of diverse SSNCLELM model construction, where {NCLELM init,i} M i=1 are generated by randomly assigning input weights and biases, the selected unlabeled set D u k is randomly resampled from D u , and {γ1, γ2, γ3}k is drawn from the candidate combinations of trade-off parameters. By applying Φk to the proposed MLPLO approach, an enlarged labeled set D l+pl = D l ∪ D pl k can be obtained and further used to update the initial models {NCLELM init,i} M i=1 , thus resulting in a set of diverse SSNCLELM models SSNCLELM k,1 , SSNCLELM k,2 , . . . , SSNCLELM k,M . By repeating the above procedure K = k1 × k2 × k3 times, we can obtain a total of K × M diverse SSNCLELM models:

Φ1: SSNCLELM 1,1 , SSNCLELM 1,2 , . . . , SSNCLELM 1,M
⋮
ΦK: SSNCLELM K,1 , SSNCLELM K,2 , . . . , SSNCLELM K,M (25)
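To make the multi-model perturbation concrete, the short sketch below enumerates the K = k1 × k2 × k3 run settings Φk; the candidate pools (seeds and trade-off combinations) are hypothetical placeholders, not the values used in the case studies.

```python
import itertools

# Hypothetical candidate pools for the three perturbation sources:
init_seeds      = [0, 1, 2]                             # k1 random NCLELM initializations
unlabeled_seeds = [10, 20]                              # k2 resamplings of the unlabeled set
gamma_combos    = [(0.01, 0.1, 0.5), (0.1, 0.1, 1.0)]   # k3 trade-off settings {g1, g2, g3}

# Each Phi_k fixes one choice from every pool; K = k1 * k2 * k3 runs in total,
# and each run yields M base models, for K * M models overall.
settings = [{"init_seed": s, "du_seed": u, "gammas": g}
            for s, u, g in itertools.product(init_seeds, unlabeled_seeds, gamma_combos)]
print(len(settings))  # K = 3 * 2 * 2 = 12
```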
Combining Diverse SSNCLELM Base Models
After obtaining diverse SSNCLELM models, we need to combine these models in an appropriate manner. It is common practice to utilize all base models for the ensemble construction. However, according to Zhou's finding that 'many could be better than all' [71], it may be better to combine only part rather than all of the base models. The underlying reason is that there may exist poor base models, which can hurt the ensemble performance instead of improving it. Thus, ensemble pruning has been strongly emphasized for ensemble model construction [60,79,80]. In the area of semi-supervised learning, the introduction of unlabeled and pseudo-labeled information may lead to unsafe semi-supervised learning [81]. That is, the prediction capability of a model may not improve, but instead deteriorate, after including the unlabeled and pseudo-labeled data.
Thus, we propose an ensemble pruning strategy based on performance improvement evaluation. First, the supervised prediction performance of a NCLELM model learnt from the labeled data is evaluated on an independent validation set D val = {X val , y val }. Then, a SSNCLELM model is trained from the augmented labeled set containing the pseudo-labeled data, and its performance is further evaluated using the same validation set. Finally, the performance improvement ratio is calculated, to decide whether the candidate SSNCLELM model is retained or not. Specifically, for the ith SSNCLELM, the performance improvement ratio (PIR) is calculated as follows:

PIR i = (RMSE init i − RMSE i ) / RMSE init i

where RMSE init i represents the RMSE of the ith NCLELM on D val and RMSE i is the RMSE of the ith SSNCLELM on D val . Then, a threshold PIR th is required to select the SSNCLELM models whose PIR values exceed PIR th . In general, PIR th can be set to 0, as used in our case studies, which means that those SSNCLELM models without performance improvement on the validation set are discarded.
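The pruning rule can be expressed in a few lines; the sketch below assumes the PIR definition reconstructed above and a strict threshold of PIR th = 0.

```python
import numpy as np

def prune_by_pir(rmse_init, rmse_ssl, pir_th=0.0):
    """Keep the SSNCLELM models whose relative RMSE improvement over their
    supervised NCLELM counterparts exceeds pir_th (0 in the case studies)."""
    rmse_init = np.asarray(rmse_init, dtype=float)
    rmse_ssl = np.asarray(rmse_ssl, dtype=float)
    pir = (rmse_init - rmse_ssl) / rmse_init   # PIR as reconstructed above
    return np.flatnonzero(pir > pir_th)        # indices of retained base models

kept = prune_by_pir([0.50, 0.42, 0.61], [0.41, 0.45, 0.50])
print(kept)  # -> [0 2]; the second model worsened on D_val and is discarded
```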
After the ensemble pruning, suppose that a total of S ≤ K × M SSNCLELM models are retained for ensemble construction. Since the simple average rule cannot always function well, it is appealing to integrate the base models using a weighting scheme. Hence, in this work, a stacking strategy is employed for model combination. The basic idea of this approach is to use the built SSNCLELM models as the first-level learners, and then train the second-level learner by using the outputs of the first-level learners as inputs, while the original output is still regarded as the output of the model. That is, the predictions of the S retained base models on the validation set are collected as ŷ val = [ŷ 1 (X val ), ŷ 2 (X val ), . . . , ŷ S (X val )]. Then, a stacking model can be built from {ŷ val , y val }, where ŷ val and y val are used as input and output data, respectively. To effectively handle the collinearity issue among the SSNCLELM base models, a PLS stacking model is built. Thus, for one query sample x q , the ensemble prediction output ŷ q can be obtained as

ŷ q = β0 + Σ S i=1 βi ŷ i q

where {β0, β1, ···, βS} denotes the PLS regression coefficients, and ŷ i q is the predicted value from the ith SSNCLELM model.
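A minimal sketch of the PLS stacking step is given below, using scikit-learn's PLSRegression on synthetic, deliberately collinear base-model predictions; all data, sizes, and the choice of two components are placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Base-model predictions on the validation set are the stacking inputs;
# the true validation labels are the stacking targets.
rng = np.random.default_rng(0)
S, n_val = 5, 40
y_val = rng.normal(size=n_val)
# Correlated columns mimic the collinear SSNCLELM base-model outputs.
Y_hat_val = y_val[:, None] + 0.1 * rng.normal(size=(n_val, S))

stacker = PLSRegression(n_components=2)  # n_components chosen by cross-validation
stacker.fit(Y_hat_val, y_val)

# For a query sample, collect the S base predictions and combine them.
y_hat_q = y_val[:1, None] + 0.1 * rng.normal(size=(1, S))
print(float(np.ravel(stacker.predict(y_hat_q))[0]))  # ensemble prediction for x_q
```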
Implementation Procedure of the EnSSNCLELM Soft Sensor
The pseudo-code of the proposed EnSSNCLELM soft sensor method is described in Algorithm 1, and its workflow is illustrated in Figure 3.
Algorithm 1. The proposed EnSSNCLELM soft sensor method.
1: Collect the labeled set D l , the unlabeled set D u , and the validation set D val ; %% Building diverse SSNCLELM models
2: for k = 1 to K do %% Estimating the pseudo labels through the MLPLO approach
3: Generate M new initial NCLELM models {NCLELM init,i} M i=1 with the hyperparameters {N ELM , N node , λ} by randomly setting the input weights and biases;
4: Select a small-scale unlabeled set D u by randomly resampling from D u ;
5: Determine the pseudo labels of D u as the decision variables y u k = [y u 1,1 , y u 1,2 , ···, y u 1,P , ···, y u M,1 , y u M,2 , ···, y u M,P ] with P = U/M, and set the lower and upper bounds [y min , y max ] for y u k based on GPR regression analysis;
6: Encode the decision variables as a real-valued chromosome s, and randomly generate an initial population Pop = {s i} n pop i=1 with n pop individuals within the decision boundary;
7: Repeat n gen times:
8: Decode the pseudo labels y u k from each individual chromosome in the population;
9: Evaluate the fitness of each individual s i in Pop according to Equation (24);
10: Generate an offspring population by performing selection, crossover, and mutation operations;
11: end of Repeat
12: Select the best individual s* from the final population;
13: Obtain the best pseudo labels y u* by decoding the chromosome s* and form the optimized pseudo-labeled set D pl k = {D u , y u*};
14: Combine D pl k with D l and update the initial NCLELM models, yielding the base models SSNCLELM k,1 , . . . , SSNCLELM k,M ;
15: end for %% Pruning and combining the base models
16: Prune the SSNCLELM models whose PIR values on D val do not exceed PIR th ;
17: Build the PLS stacking model from the retained base models and output the final EnSSNCLELM soft sensor.
Figure 3. Workflow of the proposed EnSSNCLELM soft sensor method.
Case Studies
In this section, the performance of the proposed NCLELM and EnSSNCLELM soft sensor methods is evaluated through a simulated penicillin fermentation process and an industrial fed-batch chlortetracycline (CTC) fermentation process. The methods used for comparison are as follows: (1) ELM [61]: the supervised single ELM model.
(2) NCLELM: the proposed supervised ensemble of ELM models using the NCL rule.
(3) EnNCLELM avg : the supervised ensemble of NCLELM models using the simple averaging rule.
(4) EnNCLELM pls : the supervised ensemble of NCLELM models using the PLS stacking strategy.
(5) CoELM: the co-training based semi-supervised ELM model, where the confidence evaluation method proposed by Zhou and Li (2005) [43] is used, the Euclidean distance is used to choose similar samples for confidence evaluation, and two different ELM models are trained using two randomly generated feature subsets.
(6) SSELM: the graph-based semi-supervised ELM model, which incorporates the unlabeled data through graph regularization.
(7) EnSSNCLELM avg : the semi-supervised ensemble of diverse SSNCLELM models using the simple averaging rule.
(8) EnSSNCLELM: the proposed method, i.e., the semi-supervised ensemble of SSNCLELM models using the PLS stacking strategy.
To quantitatively assess the prediction performance of different soft sensors, the root-mean-square error (RMSE) and coefficient of determination (R 2) are used:

RMSE = sqrt( (1/N test ) Σ N test n=1 (ŷ n − y n )² )

R² = 1 − Σ N test n=1 (ŷ n − y n )² / Σ N test n=1 (y n − ȳ)²

where N test is the number of testing samples; ŷ n and y n are the nth predicted and actual values of the testing sample, respectively; and ȳ is the mean value of the output variable.
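For reference, the two metrics can be computed as follows; this is a direct transcription of the reconstructed formulas above.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between actual and predicted values."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_pred - y_true) ** 2))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_pred - y_true) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```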
Process Description
The penicillin fermentation process has been widely used for investigating the modeling, monitoring, and control of batch processes [82,83]. A flow diagram of the process is illustrated in Figure 4. Since the formation of secondary metabolites (in this case, penicillin) is not associated with cell growth, the cells generally grow in a batch culture and then achieve synthesis of antibiotics through a fed-batch operation. In general, the penicillin production process lasts for 400 h, during which two cascade controllers are used to maintain the pH and temperature. In addition, sterile substrate and air are continuously fed into the bioreactor to supply the nutrients for cell growth and product formation, as well as to satisfy the oxygen demand of the microorganisms. In our experimental study, the process data were collected from the PenSim platform, which was developed by the Process Modeling, Monitoring, and Control Research Group of the Illinois Institute of Technology for simulating the fed-batch penicillin fermentation process [84]. The PenSim software can be downloaded from http://simulator.iit.edu/web/pensim/index.html (accessed on 5 August 2019). In this work, penicillin concentration was chosen as the difficult-to-measure variable, while the relevant process variables in Table 1 were used as the input variables of the soft sensor models. Moreover, the sampling interval was set as 0.5 h and the default operation settings were considered.

Table 1. Input variables for soft sensor development in the penicillin fermentation process.
Prediction Performance and Discussion of NCLELM
To verify the effectiveness and superiority of the proposed NCLELM, we conducted the experiments as follows: (1) Two application scenarios with different labeled data sizes were considered. The first case used 200 labeled samples collected from five batches for model training, while the second case used 800 labeled samples from the same batches. In addition, 800 labeled samples from another five batches were used for testing. (2) The effects of the hyperparameters {N ELM , N node , λ} were examined over different settings (see Figure 5).
(3) A comparison between NCLELM and ELM was performed to demonstrate the advantage of the NCL rule. Since both methods exhibit a random nature due to the random assignments of input weights and biases, for the sake of fairness their prediction accuracies are reported as the simple average over 50 runs. Figure 5 illustrates the testing performance of the proposed NCLELM approach for different settings of the parameters {N ELM , N node , λ}, where the prediction performance of ELM is also given for comparison. The effects of the hyperparameters {N ELM , N node , λ} on the NCLELM model performance are analyzed first. It is readily seen that, overall, the prediction errors of NCLELM decreased with the increase of N node . It is also noticeable that, when a large N node was used, the performance enhancement resulting from a further increase of N node became subtle. This was mainly because a high model complexity tends to cause overfitting when the training data are insufficient. For example, in the application case with 200 labeled training samples, as shown in Figure 5a, the testing accuracy was not improved significantly after N node exceeded 15, due to the scarcity of the labeled training data. In comparison, as shown in Figure 5b, the trend plots from the case with 800 labeled training samples decreased more sharply, and the testing RMSE values continued to become smaller as N node increased.
Similarly to N node , N ELM also had a great impact on the NCLELM model complexity. It can be seen from the four subplots in Figure 5a that, when a small N node was used for NCLELM, a large N ELM was more likely to produce a high prediction accuracy. However, when a large N node was considered, the increase of N ELM only led to a slight enhancement of the testing accuracy. A similar phenomenon can also be found in Figure 5b. Based on the above analysis, we can conclude that, when the training data size is small, a less complex model structure is preferable; whereas a more complex model, which can be built by increasing N node and/or N ELM , is required when the training data are sufficient.
Unlike N node and N ELM , λ was introduced to control the importance of the correlation-based penalty objective in the training loss function. When the individual model complexity was not high, e.g., in the case with a small N node , a large λ was required to ensure the prediction accuracy of NCLELM. One possible reason is that, in such a situation, it is desirable to emphasize the NCL effect of NCLELM, to guarantee the diversity among the ELM base models, which is crucial for building a high-performance ensemble model. As the model complexity grows, the benefits from introducing the NCL rule decline, even when a large λ is employed. One possible explanation is that, when a complex model structure is used, the diversity among the base ELM models for the NCLELM construction can be effectively maintained through the inherent random assignment of the NCLELM parameter configurations alone; in this case, the diversity improvement from the NCL rule is not obvious. In the case of λ = 0, NCLELM degenerates to the simple-averaging-based ensemble of ELM models.
Subsequently, we compared the prediction accuracy of the NCLELM and ELM methods. As can be seen from the figure, similarly to NCLELM, the prediction performance of ELM was also enhanced with the increase of N node . When the training data are limited (e.g., N trn = 200), a moderate N node , such as 20, is acceptable, whereas a larger N node , such as 30, is more suitable for a case with a relatively large training data size, such as N trn = 800. Meanwhile, we found that the performance of NCLELM was always much better than that of ELM, regardless of the parameter settings and training data sizes, which implies the outstanding performance of NCLELM. These experimental results fully confirm the effectiveness and superiority of the proposed NCLELM over the traditional ELM.
Analysis and Comparison of EnSSNCLELM Prediction Results
To evaluate the effectiveness of the proposed EnSSNCLELM soft sensor method, five batches, including 4000 labeled samples, were collected for soft sensor training. To imitate the practical application scenario with scarce labeled data but rich unlabeled data, the obtained training set was further divided into two parts: a small-sized labeled training set with only 200 labeled samples, and a large-sized unlabeled training set with 3800 samples, whose actual labels had been removed. In addition, an independent validation set including 40 labeled samples was collected from two batches for ensemble pruning and stacking. Moreover, another five batches were obtained to assess the online prediction performance of the soft sensors.
To build well-performing soft sensor models, some critical parameters of the different soft sensor methods should be determined in advance by cross-validation or trial and error, as follows: (1) ELM: The hidden node size N node was set as 20 by 5-fold cross-validation.
(2) NCLELM: the hyperparameters {N ELM , N node , λ} were selected based on the parameter analysis in Figure 5.
(3) CoELM and SSELM: N node for the two methods was selected as 20, the maximum number of iterations for CoELM was set as 300, and the penalty coefficient for SSELM was set as 0.3, by trial and error.
(4) EnSSNCLELM: The hyperparameters {N ELM , N node , λ} for the SSNCLELM base models were the same as those of NCLELM. M and U were set to 3 and 300, respectively, when formulating the MLPLOP problem. The parameter configurations for the GA-based MLPLO optimization of EnSSNCLELM were set as follows: the population size n pop = 50 and the maximum number of iterations n gen = 50, with diverse combinations of the trade-off parameters {γ1, γ2, γ3} generated from the candidate set {0, 0.01, 0.1, 0.5, 1}. In addition, the number of principal components of the PLS stacking model was determined by 5-fold cross-validation.
Considering that the EnNCLELM avg , EnSSNCLELM avg , and EnNCLELM pls methods are in fact simplified variants of the proposed EnSSNCLELM approach, their parameter configurations were set to the same values as those of EnSSNCLELM. Under the above parameter settings, a total of 375 SSNCLELM models were first built based on the extended labeled data. After ensemble pruning, 350 models were retained for constructing the ensemble models EnSSNCLELM avg and EnSSNCLELM. Meanwhile, the NCLELM base models corresponding to the SSNCLELM ones were selected for EnNCLELM avg and EnNCLELM pls modeling.
The prediction results of the different soft sensors on the testing set are tabulated in Table 2. It can be seen that ELM had the worst performance as a single model, while NCLELM delivered a better accuracy, due to the introduction of the NCL ensemble strategy. By using ELM as the base learning technique, two representative semi-supervised methods, i.e., CoELM and SSELM, were developed using co-training and embedded graph regularization, respectively. Though CoELM and SSELM include unlabeled data and thus achieved an accuracy improvement, their prediction performance was still very poor. Furthermore, by using NCLELM for base model building, two supervised ensemble models, namely EnNCLELM avg and EnNCLELM pls , were developed by introducing the simple averaging rule and the PLS stacking strategy, respectively. It is readily observed that, compared to NCLELM, both ensemble methods provided a better prediction accuracy, and the comparison between EnNCLELM pls and EnNCLELM avg further implied the superiority of the stacking scheme over the simple averaging strategy. In addition, compared to EnNCLELM avg , EnSSNCLELM avg obtained a much better prediction accuracy, due to the utilization of unlabeled data information, which is also revealed by the comparison between EnNCLELM pls and EnSSNCLELM. Overall, due to the efficient combination of ensemble and semi-supervised learning, the proposed EnSSNCLELM approach provided a much better prediction performance than the other traditional supervised and semi-supervised methods.

Figure 6 displays the trend plots of the predicted and actual penicillin concentrations using the NCLELM, EnNCLELM pls , and EnSSNCLELM methods, to provide a more intuitive comparison of model performance. It is evident that the predictions of NCLELM have large deviations from the actual values, especially in the fermentation period of 40-150 h. In comparison, EnNCLELM pls significantly reduced the deviations, due to the introduction of ensemble learning. Furthermore, we can see that EnSSNCLELM produced smaller deviations than EnNCLELM pls in the period of 70-180 h, although their prediction accuracies are comparable in other zones. The superior predictions of EnSSNCLELM were mainly due to the introduction of ensemble learning and the utilization of pseudo-labeled data. On the one hand, ensemble learning significantly reduces the prediction uncertainty of the base models, which is especially useful for handling the extreme deviations that often occur for a single model. On the other hand, using an enlarged training set can effectively improve the prediction capability of the base models, which is helpful for constructing a good ensemble.
Subsequently, the individual model performance and ensemble prediction results from the proposed EnSSNCLELM approach are illustrated in Figure 7. Since EnSSNCLELM is essentially an ensemble method, both the quality of the SSNCLELM base models and the combination scheme are critical to the ensemble construction. First, the base model performance was investigated under the supervised and semi-supervised learning settings. To this end, once a NCLELM model was built, a corresponding SSNCLELM model with the same input weights and biases was trained on the enlarged labeled set. From the figure, it can be observed that the prediction RMSE values of the SSNCLELM models were much smaller than those of the NCLELM models, except in very few cases. Obviously, such significant performance enhancement is entirely attributed to the incorporation of semi-supervised learning. Then, the ensemble prediction performance using different combination strategies was compared. With the NCLELM and SSNCLELM base models, EnNCLELM avg and EnSSNCLELM avg were built using the simple averaging rule, while EnNCLELM pls and EnSSNCLELM employed PLS stacking. One can observe that both EnNCLELM avg and EnSSNCLELM avg only obtained a prediction accuracy inferior to the best base model, mainly because simple averaging ignores the differences among the base models. By contrast, EnNCLELM pls provided a prediction accuracy close to that of the best base model, while our proposed EnSSNCLELM approach was greatly superior to all base models. These results further confirm the effectiveness of the proposed pseudo-labeling technique and the PLS stacking strategy.

The success of the proposed EnSSNCLELM method depends highly on the quality of the obtained pseudo labels. Thus, one remaining question is how well the pseudo labels coincide with the actual labels. Although in real-world applications the actual labels for the unlabeled data are unknown, in this case study the true labels could be easily acquired from the simulated platform. Hence, as illustrated in Figure 8, we present two types of comparisons: evaluating the fitting degree of the pseudo and true labels, and comparing the testing performance before and after introducing pseudo-labeled data. As indicated by the prediction results from the two example runs, the pseudo and actual labels achieved a strong agreement, with small prediction RMSE values. Moreover, it is noticeable that RMSE SSNCLELM was much lower than RMSE NCLELM , due to the inclusion of pseudo-labeled samples.
These observations further imply the effectiveness and reliability of the proposed pseudo-labeling optimization technique.
The above results show that the proposed EnSSNCLELM method can effectively combine semi-supervised learning with ensemble learning, thus helping to resolve the label scarcity issue and improve the estimation reliability of difficult-to-measure variables in industrial processes.
Process Description
Chlortetracycline (CTC) is a type of broad-spectrum antibiotic, which has been widely used in pharmaceutical and agricultural production and animal husbandry. The industrial CTC fermentation process under study is carried out by Charoen Pokphand Group Co., Ltd. (Zhumadian, China), as illustrated in Figure 9 [65]. Generally, CTC is produced through the cultivation of Streptomyces aureofaciens in an air-lift stirred fermenter with a volume of 120 m³. During the fed-batch cultivation process, real-time measurements of substrate concentration are desirable, to facilitate efficient feeding control. However, up to now, the feeding control is often still operated manually, due to the lack of reliable online hardware sensors for substrate concentration measurement, which is usually achieved through offline chemical analysis, with a long delay of 6 h. Thus, soft sensors are desirable to enable real-time estimations of substrate concentration. In this work, the process variables listed in Table 3 were used as inputs for soft sensor development.
Analysis and Comparison of Prediction Results
In this case study, a total of 14 batches of process data were collected from an industrial fermenter, including 351 labeled samples and 3183 unlabeled samples. Among the labeled samples, 124 were utilized for model training, 29 were used for validation, and the remaining 198 samples served as the testing set. In addition, 3183 unlabeled samples were used for the pseudo labeling optimization and semi-supervised learning. Moreover, the parameter settings for different soft sensor models were the same as those in the application to the penicillin fermentation process. Consequently, a total of 375 SSNCLELM models were obtained, and 264 were selected using the ensemble pruning strategy for the construction of the EnSSNCLELM model.
The substrate concentration prediction results for the different soft sensor methods are presented in Table 4. It is obvious that ELM performed much worse than the other methods, while the proposed EnSSNCLELM method performed best. Compared with the supervised single ELM model, NCLELM achieved a performance enhancement, due to the introduction of NCL ensemble learning. Meanwhile, both CoELM and SSELM also reduced the prediction errors, due to the use of unlabeled data information. Furthermore, by applying an ensemble learning philosophy and using NCLELM as the base learning technique, EnNCLELM avg and EnNCLELM pls delivered a much better estimation performance than the single NCLELM. Comparing EnNCLELM pls with EnNCLELM avg indicates that the PLS stacking outperformed the simple averaging rule. In addition, compared with EnNCLELM pls , the prediction accuracy of EnSSNCLELM was significantly improved, because of the incorporation of semi-supervised learning; a similar situation also appeared in the performance comparison of EnNCLELM avg and EnSSNCLELM avg . As in the penicillin fermentation application, the results once again demonstrated the superiority of the proposed method over the traditional supervised and semi-supervised methods.

More intuitively, the scatter plots of the predicted versus actual substrate concentration values using the NCLELM, EnNCLELM pls , and EnSSNCLELM models are depicted in Figure 10. In such a scatter plot, the accuracy of the predictive model is evaluated by the closeness of the points to the diagonal line: the closer the scatter points are to the diagonal line, the higher the model accuracy. From the figure, one can see at a glance that the test samples exhibit an unbalanced distribution. In the small-value zone (about 1-4), the testing samples are dense, while in the large-value zone (about 4-7), the samples are sparse. Furthermore, it can be readily observed that the NCLELM, EnNCLELM pls , and EnSSNCLELM methods exhibited different prediction performance. For the small-value zone, NCLELM suffers from large deviations from the diagonal line, implying a poor prediction accuracy. EnNCLELM pls reduces the deviations by introducing ensemble learning, indicating a significant accuracy improvement. In comparison, EnSSNCLELM achieved the best closeness to the diagonal line, due to the exploitation of unlabeled data, which led to a further enhancement of the model performance. In contrast, in the large-value zone, an interesting observation is that EnNCLELM pls performed even worse than NCLELM. One possible reason is that some of the NCLELM base models for the ensemble construction of EnNCLELM pls were not well learnt, due to the insufficiency of training samples in the large-value zone, and thus the combination of such models resulted in a performance deterioration instead of an improvement. By contrast, the proposed EnSSNCLELM approach still obtained the most competitive prediction accuracy in the large-value zone, mainly because the inclusion of high-confidence pseudo-labeled data significantly improved the generalization capability of the SSNCLELM base models.
In addition, Figure 11 presents the performance of the individual models before and after incorporating semi-supervised learning for the industrial fed-batch CTC fermentation process. One can see that the SSNCLELM base models achieved a better performance than the NCLELM base models, except in a few cases, which is attributed to the augmentation of the labeled training data resulting from the good pseudo-labeled data. After introducing the stacking combination, EnNCLELM pls delivered a performance close to that of the best base model, while EnSSNCLELM produced much better predictions than all the base models. In comparison, both EnNCLELM avg and EnSSNCLELM avg did not deliver a prediction accuracy enhancement, mainly because the simple averaging rule cannot consider the differences among the base models. Overall, the competitive performance of EnSSNCLELM was mainly due to the introduction of semi-supervised learning for enhancing the accuracy and diversity of the base models, as well as the utilization of ensemble learning to further improve the prediction accuracy and reliability.

Figure 11. The performance of individual models before and after incorporating semi-supervised learning for the industrial fed-batch CTC fermentation process.
The above case study results further indicate the effectiveness of the proposed MLPLO technique in extending the labeled training data and verify the superiority of the proposed EnSSNCLELM method over traditional soft sensor methods, for providing accurate quality variable predictions in industrial processes, especially for application scenarios with limited labeled data.
Conclusions
A novel semi-supervised soft sensor modeling method, referred to as EnSSNCLELM, was proposed for the quality variable prediction of industrial processes, where the labeled data are scarce but the unlabeled data are rich. The proposed approach aims to enhance the inferential performance by leveraging both the labeled and unlabeled data, as well as combining the semi-supervised and ensemble learning paradigms to achieve complementary strengths. First, to reduce the prediction instability and improve the prediction accuracy of ELM, the NCLELM algorithm was developed by introducing the NCL rule into ELM modeling. In this way, the diversity among the base ELM models is explicitly considered during NCLELM modeling. Second, by using NCLELM as the base learning technique, we proposed the MLPLO approach to provide high-confidence pseudo-labeled data, where the issue of estimating pseudo labels is formulated as an explicit optimization problem and solved through evolutionary optimization. In addition, based on the extended labeled data, a set of diverse SSNCLELM models were built. Finally, these models were combined using a stacking strategy, after ensemble pruning according to the performance enhancement of the semi-supervised models against their supervised counterparts. The application results from two chemical processes indicated that the proposed algorithm allows for a significant performance improvement over traditional supervised and semi-supervised soft sensor methods.
The success of EnSSNCLELM is highly dependent on obtaining high-quality pseudo-labeled data, as well as on building and combining diverse and accurate semi-supervised base models. Thus, how to formulate and solve the pseudo-labeling optimization problem deserves further study, e.g., defining more effective optimization objectives and employing multi-objective pseudo-labeling optimization instead of a single-objective method. Furthermore, it is of great importance to ensure safe learning of the base models, i.e., to avoid significantly degrading the learning performance when using unlabeled data. This may be achieved by efficiently selecting well-performing base models for ensemble construction and utilizing informative unlabeled data for semi-supervised modeling. These issues are left for future investigation.
Conflicts of Interest:
The authors declare no competing financial interest.
"Engineering",
"Computer Science"
] |
Propagating Uncertainties in the SALT3 Model Training Process to Cosmological Constraints
Type Ia supernovae (SNe Ia) are standardizable candles that must be modeled empirically to yield cosmological constraints. To understand the robustness of this modeling to variations in the model training procedure, we build an end-to-end pipeline to test the recently developed SALT3 model. We explore the consequences of removing pre-2000s low-$z$ or poorly calibrated $U$-band data, adjusting the amount and fidelity of SN Ia spectra, and using a model-independent framework to simulate the training data. We find the SALT3 model surfaces are improved by having additional spectra and $U$-band data, and can be shifted by $\sim 5\%$ if host galaxy contamination is not sufficiently removed from SN spectra. We find that resulting measurements of $w$ are consistent to within $2.5\%$ for all training variants explored in this work, with the largest shifts coming from variants that add color-dependent calibration offsets or host galaxy contamination to the training spectra, and those that remove pre-2000s low-$z$ data. These results demonstrate that the SALT3 model training procedure is largely robust to reasonable variations in the training data, but that additional attention must be paid to the treatment of spectroscopic data in the training process. We also find that the training procedure is sensitive to the color distributions of the input data; the resulting $w$ measurement can be biased by $\sim2\%$ if the color distribution is not sufficiently wide. Future low-$z$ data, particularly $u$-band observations and high signal-to-noise ratio SN Ia spectra, will help to significantly improve SN Ia modeling in the coming years.
INTRODUCTION
Since the discovery of cosmic acceleration (Riess et al. 1998; Perlmutter et al. 1999), Type Ia supernovae (SNe Ia) have played an important role in constraining the dark energy equation of state, w. Calibrated SN Ia distances at low redshift are also used to derive the most precise local measurements of the Hubble constant (H 0; Riess et al. 2022), currently in tension with the H 0 inferred from the cosmic microwave background (Planck Collaboration et al. 2018; see reviews from Verde et al. 2019; Di Valentino et al. 2021).
The most recent cosmological constraints from SNe Ia use up to ∼1500 SNe Ia (Scolnic et al. 2018; Jones et al. 2019; Brout et al. 2022), and in the near future, surveys such as the Vera Rubin Observatory's Legacy Survey of Space and Time (The LSST Dark Energy Science Collaboration et al. 2018) and the SN survey from the Nancy Grace Roman Telescope (Hounsell et al. 2018; Rose et al. 2021) will increase the number of well-measured SNe Ia by orders of magnitude. Although the statistical uncertainties of the cosmological measurements will be greatly reduced with these future data sets, understanding and reducing the systematic uncertainties will be crucial. Most previous studies have found that the dominant systematic uncertainties in SN Ia cosmology analyses are caused by photometric calibration of the data in the cosmology sample and the sample used to train the SN model for standardization (Betoule et al. 2014; Scolnic et al. 2018; Brout et al. 2019; Jones et al. 2019; Brout et al. 2022). However, explorations of systematic uncertainties in the SN standardization model are typically limited to photometric calibration offsets; potential systematic errors in the training process or definition of the model have been explored (e.g., Mosher et al. 2014) but are rarely propagated to cosmological parameter uncertainty budgets.
Substantial recent effort has been put into developing new SN Ia models (Saunders et al. 2018; Léget et al. 2020; Boone et al. 2021; Pierel et al. 2021; Kenworthy et al. 2021; Mandel et al. 2022). All SN Ia models for cosmology to date are empirical and therefore require a training sample of well-measured SNe (with or without spectral data); therefore, it is important to understand how robust a SN model is to the choice and quality of the training sample, and the effect of that training sample on the resulting cosmological parameter measurements. In particular, given the large impact of photometric calibration uncertainties on the SN model training, other systematic uncertainties related to the model training must also be better understood. Finally, it is important to understand the ways in which new data could enhance the model, such as data from ATLAS (Tonry et al. 2018), the Zwicky Transient Facility (Bellm et al. 2019), the Carnegie Supernova Project (CSP, Krisciunas et al. 2017), and the Young Supernova Experiment (Jones et al. 2021).
The SALT2 model (Guy et al. 2007, 2010; Betoule et al. 2014; Taylor et al. 2021) is the baseline SN standardization model used for nearly all measurements of w in recent years. Advantages of the SALT2 model include its ability to use high-redshift SNe for training, built-in k-corrections due to its continuous SED model across both phase and wavelength, and the fact that the SN amplitude is a free parameter in the training procedure, making the training independent of the cosmological model. Unfortunately, the original training code and data are not fully publicly available, and are difficult to modify and improve for systematic studies, with Mosher et al. (2014) being the most recent study to explore alterations to the SALT2 training procedure. However, a new "SALT3" model has recently been developed (Kenworthy et al. 2021, hereafter K21), which shares much of its functional form with SALT2 but includes improvements in the training procedure and new data that extend the model wavelength range from ∼0.9 to 1.1 µm. Though we use the K21 model in this work, we note that Pierel et al. (2022) recently extended the SALT3 model surfaces to 2 µm, albeit using a data set with somewhat larger calibration uncertainties, and Jones et al. (2022) built a SALT3 model that includes a host-galaxy dependent model surface.
Here, we use the open-source SALT3 model training code, SALTshaker, to quantify how training data variations or unknown physics in the data can affect the measurement of cosmological parameters. In particular, we investigate the exclusion of low-z training data without precisely measured filter throughputs, the exclusion of poorly calibrated ground-based near-UV bands, variations in the number of SN Ia spectra, and the impact of wavelength-dependent calibration errors and host-galaxy contamination in those spectra, using simulations generated from a previously trained SALT2 model (Pierel et al. 2018). Furthermore, we use simulations generated from BYOSED (Pierel et al. 2021), a model framework that is independent of SALT, to determine the sensitivity of cosmological parameters to SED features that are not modeled by the SALT framework. The BYOSED framework generates SN Ia SEDs using a base model with added "perturbers" derived from composite spectra of different SN Ia sub-populations; this produces variations in the SN spectra that are not modeled with existing frameworks, such as velocity and host-galaxy mass variations.
For each of these training scenarios, we use a simulation-based approach to quantify how variations in the available training data or unknown physics affect the recovery of the trained model, the correlations between light-curve parameters and luminosity, and the measured value of w. For this purpose, we have built an end-to-end simulation and analysis pipeline, which will enable future analyses to propagate modeling systematics into the systematic uncertainty budget of cosmological constraints.
In Section 2 we describe our analysis and pipeline, and in Section 3 we discuss our simulation approaches and training sample variations. In Section 4 we discuss the recovered models, nuisance parameters, and cosmological parameters, and in Section 5 we discuss our results and conclude.
DESCRIPTION OF THE ANALYSIS AND THE PIPELINE
Modern SN Ia cosmology analyses include a complex set of stages to go from input photometric SN Ia light curve data to cosmological parameter measurements. Those stages include fitting for light-curve parameters in order to standardize the SN brightness, correcting for observational biases using simulations, estimating nuisance parameters that relate stretch and color to brightness, computing a covariance matrix to account for systematic uncertainties, and finally, fitting for cosmological parameters. The community has developed tools to perform these individual tasks (e.g., Kessler et al. 2009a; Guy et al. 2010; Rubin et al. 2015; Kessler & Scolnic 2017) and a pipeline to control the workflow (Hinton & Brout 2020).
Typically missing from the evaluation of systematic uncertainties, however, are estimations of systematics related to the training procedure. Here, we build an end-to-end cosmological analysis pipeline that includes a SN model training stage. While previous studies such as the Pantheon+ analysis (Scolnic et al. 2021) have performed model re-training to incorporate calibration offsets in the SALT model, here we build a more flexible framework for SALT training options and systematic uncertainty evaluation. This open-source pipeline is built in Python and ties together pre-existing code to perform many of the steps necessary to go from input data or simulations to cosmological parameter estimation. In particular, many stages are built around methods implemented within the SNANA software (Kessler et al. 2010, 2019a); SNANA is a collection of SN analysis methods that perform simulations, light curve parameter estimation, distance measurement, and cosmological parameter fitting. The individual components of the pipeline are described in more detail below.
The input model in this analysis pipeline is flexible, but the model training and analysis stages currently assume a SALT3 model defined as

F(p, λ) = x 0 [M 0 (p, λ) + x 1 M 1 (p, λ)] × exp(c · CL(λ)), (1)

where M 0 and M 1 are model components that describe an average spectral surface and its first-order variation, and CL is the color law, which is a polynomial between 2800 and 8000 Å and is extrapolated beyond those wavelength ranges. The parameters x 0 , x 1 and c are light curve parameters that describe the overall amplitude, shape, and color of the light curve.
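A direct transcription of Equation (1) is given below; the component surfaces M0, M1 and the color law CL are passed in as callables, and the toy surfaces in the usage lines are placeholders only.

```python
import numpy as np

def salt3_flux(phase, wave, x0, x1, c, M0, M1, CL):
    """Evaluate the SALT3 SED of Equation (1); M0/M1 return the component
    surfaces on (phase, wave) and CL returns the color law."""
    return x0 * (M0(phase, wave) + x1 * M1(phase, wave)) * np.exp(c * CL(wave))

# Toy placeholder surfaces, purely for illustration:
wave = np.linspace(3000.0, 8000.0, 5)
flux = salt3_flux(0.0, wave, x0=1e-5, x1=0.5, c=0.05,
                  M0=lambda p, w: np.ones_like(w),
                  M1=lambda p, w: 0.1 * np.ones_like(w),
                  CL=lambda w: -0.4 * (w - 5500.0) / 2700.0)
print(flux)
```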
Using those light-curve parameters, distances are measured using the Tripp estimator (Tripp 1998):

µ = m B + α x 1 − β c − M − ∆µ bias , (2)

where x 1 and c are stretch and color parameters from a SALT3 light-curve fit, and m B is the log of the light-curve amplitude x 0 . Nuisance parameters α and β are estimated to determine the correlation of x 1 and c with luminosity. M is a combination of the SN absolute magnitude and H 0 , which are degenerate in this work. ∆µ bias is the bias correction term that is determined from simulations and used to correct for selection biases. Most versions of the Tripp estimator used in the last decade also include the variable γ, the correlation between host-galaxy mass and distance measurement (Kelly et al. 2010; Lampeitl et al. 2010; Sullivan et al. 2010), but we do not include it in this work since it is a second-order effect and requires proper simulation of the relations between the host-galaxy mass, the SN light-curve parameters, and the SN brightness (Smith et al. 2020; Popovic et al. 2021).
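The Tripp estimator is likewise a one-liner; the sign conventions follow the reconstruction of Equation (2) above, and the numerical values are illustrative.

```python
def tripp_distance(m_B, x1, c, alpha, beta, M, dmu_bias=0.0):
    """Distance modulus from Equation (2); the bias correction is subtracted,
    and M absorbs the SN absolute magnitude and H0 (degenerate here)."""
    return m_B + alpha * x1 - beta * c - M - dmu_bias

# Illustrative values only:
print(tripp_distance(m_B=24.0, x1=0.3, c=-0.05, alpha=0.14, beta=3.1, M=-19.36))
```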
The stages of the pipeline are described below and illustrated in Figure 1:

• Simulating the Training Sample and Model Training. The pipeline generates a simulated SN Ia training sample and provides the time at peak and the light curve parameters as input to the training process.
• Generating "BiasCor" Simulations.To correct the measured light curve parameters for sample selection biases, the pipeline again generates a large sample of SNe Ia from the newly trained SALT3 model; this sample is typically a factor of 10-100 times larger than the training or test samples to prevent noise in the bias corrections from significantly contributing to distance uncertainties.The extent to which SALT3 does not accurately model the true SN Ia features will propagate to errors in the bias correction.
• Light-Curve Fitting. The simulated photometry of the test sample and the BiasCor sample are fit with the newly trained SALT3 model.
• Bias Correction and Distance Measurement. After light-curve fitting, the BiasCor simulations are used to determine a mapping between simulated versus measured light-curve parameters as a function of redshift using the "BEAMS with Bias Corrections" (BBC) method (Kessler & Scolnic 2017). The BBC method determines the nuisance parameters α and β and applies Equation 2 to measure distances. For computational efficiency, we use a 1D bias correction method within the BBC framework, which corrects those distances as a function of redshift for selection effects and returns maximum-likelihood distances binned by redshift.
• Cosmological Parameter Estimation. Finally, the pipeline uses these distances to fit a wCDM model in a maximum likelihood fit to measure w, with a cosmic microwave background (CMB) prior using the R(z*) shift parameter (e.g., Eq. 69 in Komatsu et al. 2009) with σ R = 0.007. This is a computationally efficient way of producing CMB constraints with a constraining power similar to that of Planck Collaboration et al. (2018). Finally, the pipeline evaluates biases in the measured cosmology with respect to the input cosmology.
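A minimal sketch of this CMB prior term is shown below, assuming a flat wCDM cosmology and neglecting radiation; it is an approximation for illustration, not the pipeline's exact implementation.

```python
import numpy as np
from scipy.integrate import quad

def shift_parameter(omega_m, w, z_star=1089.0):
    """CMB shift parameter R(z*) (cf. Eq. 69 of Komatsu et al. 2009) for a
    flat wCDM cosmology; H0 cancels in this dimensionless form."""
    E = lambda z: np.sqrt(omega_m * (1 + z) ** 3
                          + (1 - omega_m) * (1 + z) ** (3 * (1 + w)))
    integral, _ = quad(lambda z: 1.0 / E(z), 0.0, z_star)
    return np.sqrt(omega_m) * integral

def cmb_prior_chi2(omega_m, w, R_obs, sigma_R=0.007):
    # Gaussian prior on R, added to the SN distance chi^2 in the wCDM fit.
    return ((shift_parameter(omega_m, w) - R_obs) / sigma_R) ** 2

print(shift_parameter(0.3, -1.0))  # ~1.7 for a LambdaCDM-like cosmology
```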
Our pipeline is publicly available from the SALTShaker package.
EVALUATING SYSTEMATIC UNCERTAINTIES FROM THE SALT MODEL
To quantify biases on measurements of cosmological parameters due to the SALT3 training procedure, we create realistic simulations and use them as input "data" to the SALTShaker training code. In different simulations, we vary the training set or the underlying properties of the SED model used to generate the SN Ia sample. We describe each of these simulations in detail below, and in Section 4 we analyze the effect of each simulation on the trained SALT3 model and on the recovered cosmological parameters after re-training the SALT3 model for each.
In K21, SNANA simulations were generated that reproduced the measured light-curve parameters of the real data in order to create a simple approximation of the distribution of data for a simulation-based training. Here, we instead adopt an approach in which light-curve parameters are drawn from Monte Carlo-sampled distributions that will on average yield the same x 1 and c distributions as the real data. In the baseline simulations and the variants in Section 3.2 below, we use SALT2-extended as our input model so that our simulated model is independent of the SALTShaker training process.
For the samples discussed above, SNANA simulations have already been developed as part of previous cosmological analyses. The low-z, SDSS, Pan-STARRS and SNLS simulations used here are from Scolnic et al. (2018), with x 1 and c populations from Scolnic & Kessler (2016); the DES simulations are from Kessler et al. (2019b), and the Foundation simulations are from Jones et al. (2019). We keep the same x 1 /c distributions developed by the works above. The parameters of the x 1 /c distributions for each survey are summarized in Appendix A. We note that the same distributions are used for simulating both the training sample (TrainSim) and the test sample (TestSim) in the pipeline. Each pipeline run generates one TrainSim for one TestSim. A comparison between the light-curve parameters measured from simulations versus real training data is shown in Figure 2. Spectra are also generated as part of these simulations, as discussed in K21, such that the total number of spectra, the distribution of their phases, and their noise properties as a function of wavelength for each individual survey are approximately equal to those of the real K21 training sample.
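As an illustration of how such light-curve parameter populations can be sampled, the sketch below draws from an asymmetric Gaussian (the functional form used by Scolnic & Kessler 2016); the particular means and widths here are placeholders rather than the per-survey values of Appendix A.

```python
import numpy as np

def sample_asym_gauss(mean, sig_minus, sig_plus, size, rng=None):
    """Draw from an asymmetric Gaussian: different widths below/above the
    mean, with each side weighted by its integrated probability."""
    rng = rng or np.random.default_rng()
    # A side's area is proportional to its sigma, so pick the side first,
    # then draw a half-normal deviate scaled by that side's width.
    p_plus = sig_plus / (sig_minus + sig_plus)
    side = rng.random(size) < p_plus
    dev = np.abs(rng.normal(size=size))
    return mean + np.where(side, sig_plus * dev, -sig_minus * dev)

x1 = sample_asym_gauss(0.9, 1.4, 0.7, size=1000)     # illustrative x1 population
c = sample_asym_gauss(-0.04, 0.05, 0.11, size=1000)  # illustrative c population
```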
Varying the Input Training Data
We modify the input data used for SALT3 training in several ways, listed below, to explore the potential effect of removing less reliable data from the model training and the impact of the number and quality of the spectroscopic data on the resulting model surfaces.

• Removing low-z SNe without measured filter throughputs. The low-z sample used in K21 is a compilation of data from various surveys dating back to the 1990s. Prior to the CfA3 sample (Hicken et al. 2009), the filter transmission functions were estimates and color transformations were used so that synthetic colors matched observations of Landolt standard stars. The lack of precise filter transmissions can introduce unknown systematic uncertainties in the calibration of the SN photometry. In particular, Scolnic et al. (2015) and Brout et al. (2021) found that the photometry of these surveys could be systematically off by up to 3%, but they lacked the statistics to correct for these offsets.
Thanks to the recent wealth of low-z data from CSP, Foundation, and the later CfA surveys, it is possible to train the SALT3 model after removing data without measured filter transmissions: the Calan/Tololo and CfA1-2 low-z samples. Rather than testing the systematic effect of calibration offsets, we test the statistical impact of excluding these low-z samples from the training. All other simulated data for this variant remain the same as in our baseline simulations, although the total amount of photometric and spectroscopic data is significantly smaller due to removing these samples. On average, each random sample of this variant consists of 1000 SNe with 609 spectra. The loss of SNe is about 4%, while the loss of spectra is nearly 50%.
• Removing U -band data. The observer-frame ultraviolet data (U/u band, central wavelength <4000 Å) from low-z surveys and SDSS are particularly difficult to calibrate, and in past analyses U -band model trainings have been found to be unreliable (Kessler et al. 2009b). Recent efforts to improve the calibration of legacy SN Ia data (Scolnic et al. 2015; Currie et al. 2020; Brout et al. 2021) have not attempted to recalibrate the U/u bands. Because the SALT procedure can use well-calibrated high-z data in the optical bands (with central wavelength >4000 Å) to train the UV model, it might be preferable to omit these poorly calibrated data entirely. We therefore test the effect of removing the U/u band data from our simulated training set.
• Reducing the number of spectra. Including both photometric and spectroscopic data in the SALT3 training allows the model to more reliably account for variation in spectral features, improving the fidelity of K-corrections. However, spectral data are expensive to obtain relative to photometric observations and must be iteratively recalibrated to match the best-fit model during the training process. For this reason, we test the effects of randomly removing half of the spectra from the simulated training data; this allows us to explore the degree to which spectroscopic data can be removed from the training process while still yielding reliable SN Ia distances.
• Adding calibration offsets to the spectra. The relative, wavelength-dependent flux calibration of SN spectroscopy can be highly uncertain. For this reason, SALTShaker re-calibrates the spectra to match the best-fit SALT model during each iteration of the training process. We test the robustness of this recalibration procedure by including simulations of mis-calibrated spectra in the training data. For each simulated spectrum, a multiplicative calibration warp of the form dF/dλ → dF/dλ × (1 + sλ) is applied, with s randomly selected between −10⁻⁵ Å⁻¹ and +10⁻⁵ Å⁻¹ (a sketch of this warp is given after this list). This degree of re-calibration changes the spectral flux by up to ±6% over the range from 2800 to 8700 Å, which are the minimum and maximum effective wavelengths of the photometric filters used in the training. The SALTShaker spectral recalibration procedure uses a polynomial to estimate the spectral warping; however, unlike the warping used in the simulations, the spectra are recalibrated using the exponential of a polynomial (a different functional form from the simulations), with coefficients estimated during the training process (K21, their Equation 6).
• Including host-galaxy contamination in the spectra. In many galaxies, particularly at higher redshifts, the brightness of a SN Ia is often comparable to the surface brightness of its host galaxy.
There can be large uncertainties in subtracting host-galaxy light from the SN spectrum, which could add unphysical spectral features to the SN model or even shift the model's color if the spectral re-calibration procedure is insufficient to remove higher-order calibration offsets. In the real K21 input data, host-galaxy emission lines were removed in the low-z data, and contamination was removed in the high-z data by fitting for a simultaneous SN and host spectrum and subtracting the host component (Guy et al. 2007). However, this procedure could leave some fraction of residual host contamination.
We simulate the host-galaxy contamination effect by adding a scaled host-galaxy spectrum to the SN spectra, with the host-galaxy spectrum randomly selected from a small host-galaxy library. This host-galaxy library is compiled from a random subset of high-S/N spectra of the host galaxies of SNe discovered during the Lick Observatory Supernova Search (Filippenko et al. 2001). The host-galaxy spectrum is scaled such that

S_host ∫ (dF_host/dλ) dλ = HOSTSNFRAC × ∫ (dF_peak/dλ) dλ,

where dF_host/dλ is the host spectrum, dF_peak/dλ is the SN spectrum at peak brightness, and S_host is the normalization factor for the host spectrum for a given host-galaxy contamination fraction (HOSTSNFRAC). We test HOSTSNFRAC of 100%, 50% and 10%, with respect to the SN brightness at peak. Example simulated SN spectra with different fractions of host contamination and their simulated host-galaxy spectra are shown in Figure 3. A sketch of this scaling is also given after this list.
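As a concrete illustration of the two spectral perturbations above, here are two minimal sketches. They are our own illustrations, not SALTShaker code, and all function and variable names are ours. The first applies the multiplicative calibration warp: with |s| = 10^−5 Å^−1, the relative flux change between the red and blue ends of the 2800-8700 Å range works out to roughly ±6%, matching the figure quoted above.

```python
import numpy as np

rng = np.random.default_rng(42)

def warp_spectrum(wave, flux, s_max=1e-5):
    """Apply the multiplicative calibration warp F(lam) -> F(lam) * (1 + s*lam),
    with the slope s (per Angstrom) drawn uniformly from [-s_max, +s_max]."""
    s = rng.uniform(-s_max, s_max)
    return flux * (1.0 + s * wave), s

# illustrative flat spectrum over the photometric training range
wave = np.linspace(2800.0, 8700.0, 1000)  # Angstrom
flux = np.ones_like(wave)
warped, s = warp_spectrum(wave, flux)

# end-to-end change in the relative (color-like) calibration across the range
ratio = (warped[-1] / warped[0]) / (flux[-1] / flux[0])
print(f"s = {s:+.2e} /A, red-to-blue flux ratio change: {(ratio - 1) * 100:+.1f}%")
```

The second sketch, continuing from the imports above, scales a host spectrum for a given contamination fraction, assuming (our reading of the definitions above) that the normalization is set by the integrated fluxes:

```python
def _integrate(wave, flux):
    # trapezoidal integration of a spectrum over wavelength
    return float(np.sum(0.5 * (flux[1:] + flux[:-1]) * np.diff(wave)))

def add_host_contamination(wave, f_sn, f_sn_peak, f_host, hostsnfrac):
    """Scale the host spectrum so that its integrated flux equals `hostsnfrac`
    times the integrated SN peak flux, then add it to the SN spectrum."""
    s_host = hostsnfrac * _integrate(wave, f_sn_peak) / _integrate(wave, f_host)
    return f_sn + s_host * f_host

# e.g. 50% contamination relative to the SN at peak brightness:
# f_contaminated = add_host_contamination(wave, f_sn, f_sn_peak, f_host, 0.5)
```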
Varying the Input SN SED with the BYOSED Model
The SALT formalism models the SN Ia SED with only one shape and one color parameter, but additional variability in SN Ia spectra and photometry has been seen (e.g., Fakhouri et al. 2015; Saunders et al. 2018; Léget et al. 2020; Boone et al. 2021). Observed correlations between Hubble residuals and host-galaxy mass (Kelly et al. 2010; Lampeitl et al. 2010; Sullivan et al. 2010), other host properties (Rigault et al. 2013; Jones et al. 2018; Roman et al. 2018; Rigault et al. 2020; Kelsey et al. 2021), host-galaxy reddening (Brout & Scolnic 2021; Rose et al. 2022; Meldorf et al. 2022), and the potential correlation between Hubble residuals and SN ejecta velocity (Siebert et al. 2019; Dettman et al. 2021) give additional evidence for variability beyond the standard SALT parameters.
BYOSED enables us to test the reliability of the SALT formalism for measuring accurate distances from a flexible suite of SN Ia models. These models are not necessarily intended to be true representations of real SN Ia data, but instead are a priori plausible models for the ways in which SN Ia spectra could depend on the SN shape, color, and additional standardization parameters. We use the following BYOSED model variants to test the SALTShaker training framework:
• Shape and Color. We follow Pierel et al. (2021) in simulating a baseline Hsiao et al. (2007) model with shape variations modeled as a simple wavelength-independent time dilation and color variations modeled using the SALT2 color law.
The Hsiao et al. (2007) model presents an average spectrum at every phase. While this model uses the same color law as the SALT2 model, it does not have the SALT spline-basis interpolation features imprinted on the model surface.
• Spectral variation as a function of host-galaxy mass. Following Pierel et al. (2021), we add simulated host galaxy-based variation to the shape+color simulations above using the same host-galaxy mass perturbers created by Pierel et al. (2021). Pierel et al. (2021) generated the perturbers by making composite spectra from SNe with host-galaxy masses log(M*/M⊙) > 10 and log(M*/M⊙) < 10 from the Kaepora database (Siebert et al. 2019). Though some of the observed spectral variation is likely due to differences in the distribution of x1 and c between high- and low-mass host galaxies, these composites are an approximation for testing how effectively a SN that varies as a function of its host-galaxy properties can be modeled by a procedure that does not include host-galaxy dependence in its model assumptions. Fifty percent of our simulated SNe use the low-mass composite, and the remainder use the high-mass composite. We simulated two different host-galaxy mass perturbers as in Pierel et al. (2021): one with a static host-galaxy component with no redshift evolution, the other with a host-galaxy component that decreases in amplitude as a function of redshift.
• Spectral variation as a function of SN velocity. Similar to the host-galaxy mass procedure above, we added velocity perturbers constructed by Pierel et al. (2021). These velocity perturbers are generated by constructing spectral composites for SNe with low and high Si II velocities (the divide between low and high velocity is located at the sample median of 11,000 km/s). We draw from the observed distribution of Si II velocities following Pierel et al. (2021) to randomly assign velocities to SNe. This variant is meant to mimic the spectral variations that caused a 2.7σ shift in Hubble residual as a function of Si II velocity found by Siebert et al. (2019).
RESULTS
In this section, we compare results from each of the SALT3 training variants presented in the previous section. We first run our pipeline (Section 2) 40 times for the baseline scenario described in Section 3.1; the baseline simulations were designed to reproduce the SALT3.K21 training sample. Then for each training variant described in Section 3.2 and Section 3.3, we run our pipeline 20 times. Each pipeline run starts with a different random seed. We compute the average of the trained model surfaces, inferred distances, nuisance parameters, and measurements of w for the 40 baseline runs and the 20 runs of each variant. We then compare the average of each variant's 20 runs to the average of the 40 baseline runs.
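Schematically, the per-variant averaging amounts to nothing more than a mean and an error of the mean over the seeded realizations. The sketch below shows the pattern for the w measurements; it is our own illustration with made-up stand-in numbers, not pipeline output.

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for the per-run w values; in the real pipeline these come from
# the cosmology stage of each of the 40 seeded baseline realizations
w_runs = rng.normal(-1.0, 0.04, size=40)

w_mean = w_runs.mean()
w_err = w_runs.std(ddof=1) / np.sqrt(w_runs.size)  # error of the mean
print(f"w = {w_mean:.3f} +/- {w_err:.3f}")
```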
First, however, we examined w results from the baseline simulations and found a surprising w bias of −0.024 ± 0.006 relative to ΛCDM. We traced this bias to low-z simulations that adopted the Scolnic & Kessler (2016) x1 and c distributions for the legacy low-z data, which is the sample containing most of our spectra and some of the best-sampled photometry. These distributions do not fully match those of the K21 training sample; in particular, the color distribution of the K21 data is wider. When simulating the low-z training sample with x1 and c distributions that more closely match those of K21, we recover w = −1.005 ± 0.009, a statistically insignificant bias. We discuss the potential reasons for the w bias in the baseline version of the simulations in Appendix B and conclude that training sets consisting of SNe with a wider distribution of colors, as in K21, will be more robust to cosmological biases.
For simulations that use the BYOSED models as the underlying models, we only compare the distances and the measurements of w, since it is not meaningful to compare the model surfaces when the underlying models used for simulation are different. We compare the BYOSED variants with the BYOSED stretch and color model described in Section 3.3, which serves as the baseline for the BYOSED models. The simulations that use the baseline BYOSED model yield w = −0.971 ± 0.016, which is consistent with an expected offset found in Pierel et al. (2021).
For convenience, we define short names for each training variant in Table 1. Below, we discuss variations in the trained model surfaces and the color law (Section 4.1), changes in the resulting distance moduli (Section 4.2), and changes in the nuisance parameters and cosmological parameters (Section 4.3).
Variations in the trained model
In this section, we examine the differences in the trained SALT3 model surfaces given the training variants described in Section 3, including the principal components M0 and M1 and the color law CL.
Spectral components
We first examine the changes in model components with respect to the baseline M0 component:

∆Mi/M0,baseline = (Mi,variant − Mi,baseline)/M0,baseline, for i = 0, 1.

We note that because M1 can be zero or negative, we define its change fractionally with respect to M0. Therefore, ∆M1/M0,baseline is the fractional difference in the predicted light curve or spectrum of a SN having an x1 parameter that is one standard deviation away from the mean.
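In code, this normalization is a one-liner; the sketch below is our own illustration of the definition above (names are ours), applied to model components evaluated on a common phase/wavelength grid.

```python
import numpy as np

def fractional_change(m_variant, m_baseline, m0_baseline):
    """Fractional change of a model component relative to the baseline M0.
    Normalizing by M0 (rather than the component itself) keeps the quantity
    well defined for M1, which can be zero or negative."""
    return (np.asarray(m_variant) - np.asarray(m_baseline)) / np.asarray(m0_baseline)

# e.g., on a common grid:
# d_m1 = fractional_change(m1_variant, m1_baseline, m0_baseline)
```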
Figure 4 shows the relative model changes in wavelength space for the SALT3 model components at SN peak brightness. For the variants shown in Figure 4 (left), the M0 components are consistent within 2% with the baseline model between 3000 Å and 7500 Å, but deviate more in the bluer and redder regions beyond that wavelength range, by up to ∼10%. The M1 components are mostly consistent, despite a larger variation in the bluer region below 3000 Å. For the variants with different fractions of host-galaxy contamination shown in Figure 4 (right), the models are mostly consistent within 5% with respect to the baseline when there is 10% host-galaxy light, but unsurprisingly, large (>5%) spikes are seen in the M0 components and some regions of the M1 components when there is greater host-galaxy contamination.
Integrated fluxes
In Figure 5, we show the relative model flux variations in the UBVRI bands. We integrate the model components M0 and M1 over the UBVRI passbands for each variant, and show the relative changes with respect to the baseline M0 model flux (also integrated over the UBVRI passbands). We find that the M0 component for the BVRI light curves is consistent to within 2% at phases greater than −15 days for NO-LOWZ, NO-U, MIS-CAL-SPEC and HALF-SPEC. The M1 component shows larger variations, up to ∼3%, and biases of up to 4% relative to M0.
The U-band model fluxes, however, show significantly greater variation compared to the other bands; the training variants NO-LOWZ, NO-U, MIS-CAL-SPEC and HALF-SPEC all have substantially higher variation in the UV model fluxes. This is unsurprising, as each of these variants removes a significant fraction of the available U-band spectra or photometry. This test therefore indicates that for the U-band SALT model to be robust, additional observations are needed to cover the rest-frame U band, likely with corresponding high-S/N spectra, perhaps from future facilities that will have well-calibrated u-band data such as the Rubin Observatory. Furthermore, the increased variation in ∆M1 indicates that there is value in obtaining a larger SALT3 training set in order to better constrain the first principal component of the model. Finally, when the mis-calibration effect is simulated in the SN spectra, we see ∼2-5% offsets in U and I, which are near the blue and red ends of the model's spectral range, respectively, and which may be a result of limitations in the SALT3 re-calibration procedure.
In Figure 5 (right) we examine the effect of including host-galaxy contamination in the SALT training spectra in more detail by showing model flux variations from the 100%, 50%, and 10% contamination training variants. While HOST-10 has variation at the level of ∼3% across most of the phase range, higher contamination yields further degradation in the training. The HOST-50 and HOST-100 variants show color-dependent offsets of ≳5% in the U, R, and I band model fluxes; we note that because the model is normalized to maximum light in the rest-frame B band, this band shows smaller variation, at the few-percent level. High-z SNe in particular tend to be fainter relative to the local surface brightness of their host galaxies, which can result in substantial host contamination; this level of bias in the model surfaces shows that high-z SN spectra must have their host-galaxy contributions carefully removed to avoid model biases.
Color law
The changes in the color law for each data variant are shown in Figure 6. We describe the color law change with respect to the baseline with the following equation:

∆m(c, λ) = c × [CL_variant(λ) − CL_baseline(λ)].

Here, ∆m(c, λ) is the change in magnitude as a function of wavelength for a SN with color c relative to the baseline model. We choose a nominal c = 0.1 for the comparisons in Figure 6, equal to a difference in reddening between the B and V bands, compared to an average SN Ia, of 0.1 mag. Typical SN Ia cosmology cuts limit c < 0.3, so we note that the reddest (and bluest) SNe Ia in a typical data set would be biased by three times the values shown in this figure.
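The comparison itself is then a simple linear scaling. The snippet below is our own sketch of the definition above (names are ours), assuming the two color laws are tabulated on a common wavelength grid.

```python
import numpy as np

def delta_mag(c, cl_variant, cl_baseline):
    """Magnitude difference vs. wavelength for a SN of color c, given two
    color laws CL(lambda) evaluated on a common wavelength grid."""
    return c * (np.asarray(cl_variant) - np.asarray(cl_baseline))

# nominal comparison color; a c = 0.3 SN is biased by 3x these values
# dm = delta_mag(0.1, cl_variant, cl_baseline)
```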
Similarly to the spectral components, the color laws are consistent to 2% or better for c = 0.1 between 3000 Å and 7500 Å, but have larger deviations in the bluer and redder regions. The SALT3 color law is extrapolated beyond 8000 Å, and these extrapolated regions at redder wavelengths in particular can deviate by up to ∼10% when including host contamination effects, and therefore should not be considered robust to reasonable variations in the training data.
Differences in distance moduli
We show the changes in distance moduli for each data variant with respect to the baseline in Figure 7, and for the BYOSED simulations in Figure 8. For z < 0.6, the distance modulus changes are below 0.025 mag for each of the input data variants. Larger changes of ∼0.05 mag are seen for z > 0.6, with the largest changes coming from MIS-CAL-SPEC and HOST-50.
The RMS of the Hubble residuals is consistent to within ∼2% across all variants that are simulated with the extended SALT2 model, with the MIS-CAL-SPEC model having the largest scatter, 0.149 mag, but negligibly higher than the baseline value of 0.147 mag. For the BYOSED variants, the RMS of the Hubble residuals is higher, ∼0.245 mag. The reduced χ2 of the SALT3 light curve fitting is also consistent across all the variants that are simulated with the extended SALT2 model. By excluding the error term due to the in-sample variance (the degree to which the SALT3 formalism does not fully encapsulate the intrinsic variability of the underlying SN data), we compare reduced χ2 across training options. The baseline model has a median reduced χ2 = 0.95, which could indicate slight under-regularization in the training procedure or very modest overestimation of the uncertainties.
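For clarity, the statistic being compared is the usual per-fit reduced χ2 computed from the measurement errors alone, with the model's in-sample (intrinsic) variance term left out of the denominator. A minimal sketch of that bookkeeping, our own illustration rather than the fitter's actual code:

```python
import numpy as np

def reduced_chi2(flux, model_flux, sigma_meas, n_free_params):
    """Reduced chi^2 of a light-curve fit using measurement errors only,
    i.e. with the model's in-sample variance term excluded."""
    resid = (np.asarray(flux) - np.asarray(model_flux)) / np.asarray(sigma_meas)
    dof = resid.size - n_free_params
    return float(np.sum(resid**2) / dof)
```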
Most training options yield a comparable reduced χ2 to the baseline case, but the variants NO-U, HOST-10, HOST-50 and HOST-100 yield a slightly higher reduced χ2, ranging from 0.98 to 1.01. MIS-CAL-SPEC has a smaller reduced χ2 = 0.92. The BYOSED models have a slightly larger median reduced χ2 ∼ 1.08, perhaps a consequence of these models being semi-independent of the SALT model framework.
In spite of the increased χ2 from some variants, it is encouraging that the distances change very little on average. Biases in distances relative to the baseline model, averaged in redshift bins of 0 < z < 0.2 versus 0.4 < z < 0.6, are shown in Table 2. All biases, except for HOST-100 and MIS-CAL-SPEC, are consistent with zero with an uncertainty of less than 0.01 mag, demonstrating that the training procedure remains effective at standardizing its training sample, even if the model surfaces themselves are shifting at the few-percent level. HOST-100 and MIS-CAL-SPEC have biases of ∼0.02 and −0.01 mag, respectively.
Biases on cosmological and nuisance parameters
Despite the apparent shifts in M0 and M1 described in Section 4.1.2, we find that inferred distances are consistent. This may be due to compensating shifts in α and β in the distance estimation stage. The differences in these nuisance parameters are shown in Table 2. We observe differences in α of ≲2σ for the SALT2-extended model simulations, except for one high-significance shift of 0.014 for the HOST-10 variant. Biases in the β parameter are within 2σ significance for all variants simulated with SALT2-extended, with a magnitude of up to 0.092.
Finally, Table 2 shows the differences in measurements of w from these different training variants. All are consistent with the baseline model at the ∼2σ level (∆w ≲ 0.025), but with the largest potential deviations coming from variants that adversely affect the reliability of the training spectra (including the NO-LOWZ variant, which removes many spectra).
We compare the BYOSED models with BYO-STRETCH-COLOR, and show the differences in w in Table 3. All values of w are consistent to within 0.025, showing that the SALT3 training procedure can largely account for the effect of possible correlations between the host galaxy and the SN Ia SED, or additional SN properties such as SN velocity.
DISCUSSION
In this paper, we test the robustness of the newly developed SALTShaker code and the SALT3 model, and quantify the systematic uncertainties of the training procedure on cosmological parameter measurements. We explore the effect of removing legacy low-z data without measured filter throughputs, observer-frame u/U data (which are poorly calibrated in the real training data), and a large fraction of the spectroscopic training data. We also test the consequences of mis-calibrated SN spectra and the effects of host-galaxy contamination in the spectra. Finally, we use a simulated model independent of SALT (BYOSED; Pierel et al. 2021) to test whether a SALT3 model trained on these data can produce consistent distances from 0 < z < 1.
We generally find better than 2% consistency of the model across these different training variants. The most significant changes in the model are seen in the U band, in pre-maximum light phases, and at the redder and bluer edges of the model surfaces. Host-galaxy contamination at 10% of the SN maximum brightness does not produce large changes in the trained model; however, host contamination at the level of >50% of the SN peak brightness introduces significant biases in the trained model surfaces. We also see evidence that the 1990s-era legacy data and the observer-frame U-band data are playing important roles in constraining the model surfaces and the color law. It appears that individual SNe with photometry spanning the full available model wavelength range (U/u to I/z) may also improve the fidelity of the model, and the existing model training sample may need to be expanded to fully replace these data. We suggest compiling additional training samples, particularly at low redshift, from CSP and Rubin Observatory photometry that include uBVgri and ugriz (and perhaps y) data, respectively, for the same SNe; this will help to fully constrain the model surfaces and color law simultaneously and allow us to remove the less reliable low-redshift training samples. SN spectra also appear to be important for constraining the model surfaces in the SALT3 training process, particularly at the bluest and reddest ends of the wavelength range; when half of our spectral training sample is removed, we find model variations on the order of up to ∼5-10%.
Although these different training options demonstrate the phases and wavelength ranges where the SALT3 training is less robust, we see consistent distance measurements across nearly the full redshift range, as well as consistent cosmological parameter measurements to within 2% in most cases; the MIS-CAL-SPEC variant has the largest deviation of 0.025 at 1.9σ significance. Provided that the cosmological parameter estimation is run on data drawn from the same intrinsic distribution as the training data (i.e., it is important that the SALT3 model is re-trained on the data used in a given cosmology analysis), we find that cosmological parameter measurements from the SALT3 model are robust at the 2% level to most realistic variations in the SALT3 training process. However, we suggest that additional attention must be paid to the way in which SALT3 recalibrates spectra during the training process, and to the contamination of the high-redshift training spectra by host-galaxy light. Additionally, we find that the SALT3 training process is sensitive to the color distribution of the input training data, and the resulting w measurement can be biased by ∼2% if the color distribution is not sufficiently wide.
Finally, we suggest using the pipeline developed in this work, or similar approaches, to propagate variants like these into the systematic uncertainty budgets of future cosmological analyses. We can only be confident in our constraints on dark energy if we fully understand the assumptions that are propagated into SN standardization modeling. Fortunately, our analysis indicates that these types of systematic uncertainties in the training process bias w at a level below the precision of current analyses. However, as hundreds of thousands of additional SNe Ia are discovered in the coming years, w measurements become more precise, and these new data simultaneously improve our models for estimating SN distances and extend the viable wavelength range at which measuring SN Ia distances is possible, these types of end-to-end systematic uncertainty tests will become increasingly important.
M.D. is supported by the Horizon Fellowship at the Johns Hopkins University. Support for D.O.J. was provided by NASA through the NASA Hubble Fellowship grant HF2-51462.001 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. Research on systematic uncertainties in supernova cosmology at Rutgers University is funded by DOE awards DE-SC0011636 and DE-SC0010008. The UC Santa Cruz team is supported in part by NASA grants 14-WPS14-0048, NNG16PJ34C, and NNG17PX03C, NSF grants AST-1518052 and AST-1815935, the Gordon and Betty Moore Foundation, the Heising-Simons Foundation, and by a fellowship from the David and Lucile Packard Foundation to R.J.F. This work was completed in part with resources provided by the University of Chicago's Research Computing Center.
Software: Astropy (Astropy Collaboration et al. 2013, 2018, 2022), Matplotlib (Hunter 2007), NumPy (Harris et al. 2020).
B. BIASES IN W FROM THE BASELINE SIMULATIONS
Our baseline simulations described in Section 3.1 (which use x1/c populations derived by Scolnic & Kessler 2016) yield a w bias of −0.024 ± 0.006. In order to investigate the origin of this bias, we create another set of baseline training set simulations following K21, which uses the x1 and c parameter values from the actual K21 data sample. For this sample, we find w = −1.005 ± 0.009 (averaged over 20 random samples), a statistically insignificant bias.
The above change was applied only to the legacy low-z simulations because we found that simulations based on the Scolnic & Kessler (2016) x1/c populations did not fully match the K21 data (Figure B.1). We note that this is not a deficiency of the Scolnic & Kessler (2016) results; rather, the K21 training data include additional SNe that were not considered in Scolnic & Kessler (2016). These legacy low-z data, while a limited subset of the training, contain a majority of the spectra used in the training and also constitute some of the highest-S/N and best-sampled photometry.
Figure B.1 shows the color distributions of the low-z training data from our original baseline samples, the regenerated K21-like baseline samples, and the K21 data. We find that the K21-like simulations (and the K21 data) have significantly more SNe with blue colors compared to the original simulations, likely due to the addition of new data from the CfA4 survey, the CSP survey, and z < 0.01 SNe that are too nearby to be suitable for dark energy measurements (but can be used for light-curve training).
An alternative source of bias could be differences between the x1 and c populations in the bias correction simulations versus the "test" simulations. Because the SALT training process enforces definitions of mean x1 = 0 and mean c = 0, subtle shifts in the simulated shape/color parameters of the simulated sample can cause mismatches between the training sample and our adopted x1/c distributions for the bias correction simulations. However, slight changes in the mean of the c distribution (∼0.01) to make up for these effects had no significant effect on the w bias. We therefore infer that the difference in w stems from differences in the two trained SALT3 models themselves, and that the trained models are sensitive to the color distributions of the input training data, though in future work we will re-compute x1 and c populations as part of our pipeline for each individual data set following the method of Scolnic & Kessler (2016). Figure B.2 shows the differences in the averaged model components trained with these two samples. We find a >5% difference in the rest-frame U band. The color laws trained from these two samples are similar, with a slight difference for λ > 7000 Å. We note that although on average the color laws are not significantly different between these two sets of simulations, we find that the SALT3 color law is particularly sensitive to the c range of the training data for each individual random sample. We advise that future model trainings use a training sample with a wide distribution of colors (similar to K21) in order to avoid subtle cosmological biases such as the one found here.
Figure 2. Comparison of light curve parameter distributions between the average of 40 baseline simulations (Section 3.1) and real K21 data. Blue histograms are the simulated baseline data, with the shaded area being the standard deviation of the number in each bin from the simulations, and red histograms are the K21 data.
Figure 3. Examples of simulated SN and host-galaxy spectra. Blue: simulated final SN spectra with host-galaxy contamination added; from left to right the host-galaxy fractions are 100%, 50% and 10% relative to the SN peak brightness. Orange: the simulated host-galaxy spectra for each SN, scaled relative to the SN peak brightness. The uncontaminated SN spectra are simply the difference between the final SN spectra and the host-galaxy spectra.
Figure 4. Relative changes of the model components M0 and M1 at peak for different variants of the training data with respect to the baseline M0 component, with the gray area showing the error of the mean of the baseline M0 component. Left: model changes for the following training data variations: removing the legacy low-z data, removing the u/U bands, adding calibration offsets to the training spectra, and removing 50% of the spectral data. Right: model changes after including 100%, 50% and 10% host-galaxy contamination in the spectral data, with contamination scaled relative to the SN brightness at maximum light.
Figure 5. Same as Figure 4, but showing relative changes of the integrated model fluxes F0 and F1 with respect to the baseline model flux F0. Individual colors show the model components when integrated over the UBVRI passbands.
Figure 6. Difference in the color law for each training data variation with respect to the baseline. The coefficient 0.1 is chosen to show the change in predicted magnitude as a function of wavelength for a SN with c = 0.1.
Figure 7. Changes in the distance moduli as a function of redshift for each training variant, with respect to the baseline. Left: distance modulus changes for the following training data variations (from top to bottom): removing the legacy low-z data, removing the u/U bands, including mis-calibrated spectra, and removing 50% of the spectral data. Right: distance modulus changes when including 100%, 50% and 10% host-galaxy contamination, with respect to SN brightness at maximum light, in the input spectral data.
Figure 8. Same as Figure 7, but for the BYOSED simulation variants.
Figure B.1. Color distributions of the low-z training data from the original baseline simulations (with low-z x1/c parameters drawn randomly from previously generated populations, blue), the regenerated K21-like simulations (with K21 low-z x1/c parameters, green), and the K21 data (red). The shaded areas correspond to the standard deviation of the number in each bin for the multiple simulations.
Figure B.2. Difference in the averaged SALT3 model flux (integrated over UBVRI bands) between the two samples described in Appendix B (the original baseline samples and the regenerated K21-like samples), relative to the M0 flux of the original baseline samples.
Schematic overview of our analysis pipeline, from simulation of the underlying SED model to cosmological parameter constraints. The arrows show the flow of the pipeline and how each stage is connected.
Table 1. Short names for each training variant
NO-LOWZ: Removing the legacy low-z samples (Calan/Tololo and CfA1-2) without measured filter throughputs
NO-U: Removing the observer-frame U/u-band data
MIS-CAL-SPEC: Including mis-calibrated spectra in the training data
HALF-SPEC: Randomly removing half of the training spectra
HOST-100: Including host-galaxy contamination with a fraction as bright as 100% of the SN peak brightness
HOST-50: Including host-galaxy contamination with a fraction as bright as 50% of the SN peak brightness
HOST-10: Including host-galaxy contamination with a fraction as bright as 10% of the SN peak brightness
BYO-STRETCH-COLOR: Baseline BYOSED model with stretch and color effects
BYO-VEL: BYOSED model with a velocity effect added to the baseline
BYO-HOST: BYOSED model with a static host-galaxy mass effect added to the baseline
BYO-HOST-Z-DEP: BYOSED model with a redshift-dependent host-galaxy mass effect added to the baseline
Table 2. Biases on Nuisance Parameters and w for SALT-based Variants. Note (a): Relative to the baseline fitting results, the difference between the average Hubble residual at 0.01 < z < 0.2 and the average Hubble residual at 0.4 < z < 0.6.
Table A1. Asymmetric Gaussian parameters for the x1 and c populations. Note: Following Scolnic & Kessler (2016), x1/c is the value with the maximum probability for the asymmetric Gaussian x1/c distribution; σ+ and σ− are the corresponding Gaussian widths on the high and low sides, respectively. | 10,950.8 | 2022-12-13T00:00:00.000 | ["Physics"] |
Effect of the Amount of Ephestia kuehniella Eggs for Rearing on Development, Survival, and Reproduction of Orius laevigatus
Simple Summary: Economical mass rearing of natural enemies is pivotal for their use as augmentative biological control agents. To this end, the conditions for mass production have been widely investigated for different natural enemies, including the predator Orius laevigatus. The key component for cost optimization of production systems is the type and amount of diet supplied. In the case of O. laevigatus, Ephestia kuehniella eggs have proved to be nutritionally superior to other cheaper natural and artificial foods. Consequently, the current practice in industrial production is the use of these eggs as food. Therefore, there is a need to know the minimum amount of Ephestia eggs needed per individual, leading to a cost reduction, which in turn will favor biological control adoption. In this study, we established a minimum of 1 and 3 Ephestia eggs per day for the first two nymphal instars, respectively, and 8 eggs per day for subsequent development until adulthood. Benefits and savings for industrial production of O. laevigatus and its use in biological control are discussed. Abstract: Orius laevigatus is a key tool for the success of augmentative biological control programs in protected crops. This biological control agent is mass-reared feeding on eggs of Ephestia kuehniella. However, this factitious prey is expensive, accounting for a significant percentage of the rearing costs. Therefore, there is a need to optimize the amount of Ephestia eggs needed per individual, leading to a cost reduction, which in turn will favor biological control adoption. This study investigated the effect of the amount of Ephestia eggs provided on the developmental and reproductive fitness of O. laevigatus. A daily supply of at least 1 and 3 Ephestia eggs was needed for optimal development of the first two nymphal instars, respectively, although for maximum survival, 1 egg was enough for both instars. For subsequent development until adulthood, a minimum of 8 eggs per day was needed to fully support growth, but only 3 eggs for optimal survival. Similarly, male body size was also maximized by feeding 8 eggs, but for maximum female body size 10 eggs per day were required. The oviposition rate of females increased with the daily number of Ephestia eggs provided, until a plateau was reached at 8 eggs/day. Benefits and savings for industrial production of O. laevigatus are discussed.
Introduction
Orius laevigatus (Fieber) (Hemiptera: Anthocoridae) is a key tool for the success of augmentative biological control programs in protected crops in Europe, Africa, and Asia [1][2][3]. This generalist predator is widely used to control small pests, especially against thrips such as Frankliniella occidentalis (Pergande) (Thysanoptera: Thripidae) [3][4][5][6]. As a biological control agent, it is mass-reared feeding on eggs of Ephestia kuehniella Zeller (Lepidoptera: Pyralidae) (hereafter Ephestia eggs). However, this factitious prey is expensive, with market prices of around 600-800 €/kg [7], accounting for a significant percentage of the rearing costs. Therefore, there is a need to optimize its mass-scale production, leading to a cost reduction, which in turn will contribute to a wider adoption of biological control. Several attempts have been made to find a cheaper food source, such as pollen, Artemia cysts, or several artificial diets [8][9][10][11][12][13][14][15][16][17][18][19][20]. However, no other food provided the quality of Ephestia eggs. Another approach is to optimize the amount of Ephestia eggs needed per individual for survival and development from egg to adult, and for oviposition upon adult emergence.
Studies of the effect of Ephestia egg density on the developmental performance of other Orius species have been reported [21,22]. However, those works did not study the effect on each nymphal stage, only on the whole development from egg to adult. Yet, it is expected that younger instars will require less food than older (larger) stages. Therefore, fine-tuning the amount of diet for each immature stage will further optimize the production costs.
The current study investigated the effect of the amount of Ephestia eggs provided on the developmental and reproductive fitness of O. laevigatus in order to assess the optimal number of eggs per day for each life stage for cost-effective mass production.
Materials and Methods
A commercial population of O. laevigatus was purchased from Agrobio SL (Almeria, Spain, ORIcontrol®). This population was reared in the laboratory in 1-L plastic containers with filter paper on the lid, with ad libitum access to frozen Ephestia eggs as food, pieces of green bean pods as a moisture source and egg-laying substrate, and black wheat husk as a hideout to avoid cannibalism. The containers were maintained under controlled conditions at 26 ± 1 °C, 65 ± 5% RH, and an L16:D8 light regime. The initial population was reproduced in the laboratory for 2-4 generations before the experiments and maintained at more than 1000 individuals.
Fresh bean pods were introduced into the containers for the females to lay eggs. After 12 h, the bean pods were collected and placed in 30 mL polypropylene cups with Ephestia eggs, assessing emergence daily. Newly hatched (<24 h) nymphs (N1) were individually transferred to 5 mL plastic vials, each with a section (2 mm diameter, 20 mm length) of cotton petiole (Gossypium spp.) inside to provide moisture, and covered with a lid. Sixty individuals were used per prey density: 1, 3, 5, 8, and 10 Ephestia eggs per day. Every day, the corresponding new eggs were added, and development and mortality were checked under a stereoscopic microscope (Leica Microsistemas, Hospitalet de Llobregat, Spain). Upon adulthood, the individuals were frozen and the width of the pronotum was measured with an optical micrometer (Leica Microsistemas, Hospitalet de Llobregat, Spain) at 50×. To study the effect of the amount of prey on fecundity, freshly emerged adults (<24 h old) were sexed, and 30 pairs per diet treatment were isolated in 30 mL polypropylene cups with ventilated lids, with a piece of green bean pod end-sealed with paraffin wax as an egg-laying substrate. Well-fed adults from the laboratory colony were used to avoid any carry-over effect from immature stages. The male was removed after an hour, once mating had taken place. Prey density treatments were 0, 2, 5, 8, and 12 Ephestia eggs per day. Every day, new food was added and the bean pod section was replaced, counting the number of eggs oviposited. Fecundity was assessed until day 10 after mating, as early fecundity has been reported to be a good predictor of lifetime fecundity [17,21,23].
Differences among prey densities in developmental times, survival, adult body size, and early fecundity were analysed using one-way ANOVA. Assumptions of normality and homogeneity of variances were checked prior to the analysis. When significant differences between diets were observed, means were separated using Tukey's HSD test.
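To illustrate this analysis flow, here is a minimal sketch in Python (our own, with made-up stand-in data rather than the measured values): a one-way ANOVA across prey densities, followed by Tukey's HSD mean separation when the omnibus test is significant.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# stand-in developmental times (days) for three of the prey densities;
# the real analysis uses the measured values for all five densities
groups = {"1": rng.normal(16.7, 1.5, 60),
          "3": rng.normal(11.0, 1.0, 60),
          "8": rng.normal(9.6, 0.8, 60)}

f_stat, p_val = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_val:.3g}")

if p_val < 0.05:  # separate means only when the ANOVA is significant
    values = np.concatenate(list(groups.values()))
    labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
    print(pairwise_tukeyhsd(values, labels))
```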
Results
The duration of nymphal development of O. laevigatus was significantly affected by the amount of Ephestia eggs provided per day (Figure 1, Table 1). Individuals supplied with only 1 egg showed the longest developmental time (16.7 ± 0.3 days). In contrast, those supplied with 8 and 10 eggs per day exhibited the shortest duration from N1 to adult (9.6 ± 0.1 and 9.7 ± 0.1 days, respectively), with the treatments of 3 and 5 eggs in an intermediate position (11.0 ± 0.1 and 10.2 ± 0.1 days, respectively). The duration of the youngest nymphal stage did not differ among treatments, but the differences among diets increased with age. From the second nymphal instar (N2), the treatment with the lowest amount (1 egg/day) showed significantly longer developmental times. The differences were most marked for the last nymphal stage (N5), with developmental times for 1 egg/day more than twice those observed in the other treatments. The diets with 3 and 5 eggs/day also resulted in longer developmental times, especially in the oldest nymphal stage (N5). The amount of Ephestia eggs per day provided to the adult female impacted fecundity (Figure 3). When provided with no prey, the females showed an average fecundity of only 5.0 ± 1.8 eggs per female. Conversely, the females supplied with 8 and 12 eggs per day exhibited the highest fecundity, laying 46.8 ± 6.0 and 49.0 ± 7.2 eggs per female, respectively.
Regarding the percentage of females laying eggs, only 30% of the females with no prey available were able to oviposit, a percentage significantly lower than that of the females provided with 5, 8, and 12 eggs (70-80%), with those supplied with 2 eggs in an intermediate position (55%).
Discussion
Economical mass rearing of natural enemies is pivotal for their use as augmentative biological control agents. To this end, the conditions for mass production have been widely investigated for different natural enemies, including O. laevigatus. The key component for cost optimization of production systems is the type and amount of diet supplied. In the case of the Orius species, Ephestia eggs have proved to be nutritionally superior to other cheaper natural and artificial foods [8][9][10][11][12][13][14][15][16][17][18][19][20]. Consequently, the current practice in industrial production is the use of these eggs as factitious prey. However, as far as we know, there is no published work on the minimal daily food supply for the different instars of O. laevigatus, information that is essential for cost-effective mass rearing.
In our study, a daily supply of at least 1 and 3 flour moth eggs was needed for optimal development of the first two nymphal instars (N1 and N2), respectively, although for maximum survival, 1 egg per day was enough for both instars. For subsequent development until adulthood, a minimum of 8 eggs per day was needed to fully support growth, but only 3 eggs per day for optimal survival.
A previous work studied the minimum amount of Ephestia eggs for development and survival of O. sauteri (Poppius) from egg to adult [21]. It recorded a minimum of 7.5 eggs per day (30 Ephestia eggs per four days) for an optimal developmental rate. Similarly, another study reported a daily supply of 8 Ephestia eggs for maximum development and survival of O. insidiosus (Say) [22]. Both results are fully consistent with our findings regarding an optimum of 8 Ephestia eggs per day for the maximum developmental rate of O. laevigatus from egg to adult. However, we went further, minimizing costs by lowering the minimum amount for the youngest nymphs: 1 egg/day for N1, 3 eggs/day for N2, and then 8 eggs/day for N3 and subsequent instars.
Considering the duration of development and the optimal amount of Ephestia eggs, 76.8 eggs (8 eggs/day × 9.6 days) will be needed to obtain one adult of O. laevigatus. This amount is much lower than that considered standard for Orius rearing (216 Ephestia eggs) [24], or that obtained as the pre-imaginal predation capacity (174.6 eggs) [25]. However, the minimal amount of prey for maximum growth is expected to be lower than the number of prey an individual can eat. Extracting the data from the respective works, 92 eggs (8 eggs/day × 11.5 days) and 102.7 eggs (7.5 × 13.7) would be required for a female of O. insidiosus [22] and O. sauteri [21], respectively. This suggests that O. laevigatus should be cheaper to produce. For a market price of Ephestia eggs of 600-800 €/kg [7], and an average number of 36,000 eggs per gram, the cost of the diet for producing O. laevigatus ranges from 1.28 to 1.71 € per 1000 individuals.
However, taking into account the specific needs and duration of each nymphal stage obtained in the present study for O. laevigatus, the cost of the food can be optimized: 2.05 eggs for N1, 3.54 for N2, 11.84 for N3, 13.60 for N4, and 25.84 for N5, totalling 56.87 eggs. Therefore, the diet costs would be reduced to 0.95-1.26 € per 1000 individuals, saving 0.33-0.44 € on every bottle of 1000 Orius produced.
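These cost figures follow directly from the per-stage egg totals; the short script below (our own arithmetic check, with variable names ours) reproduces the 1.28-1.71 € and 0.95-1.26 € ranges quoted above.

```python
EGGS_PER_GRAM = 36_000
PRICE_EUR_PER_GRAM = (0.60, 0.80)  # 600-800 EUR/kg

def diet_cost_per_1000(eggs_per_individual):
    """Diet cost range (EUR) to rear 1000 individuals at a given egg budget."""
    grams = 1000 * eggs_per_individual / EGGS_PER_GRAM
    return tuple(grams * p for p in PRICE_EUR_PER_GRAM)

flat = 8 * 9.6                                 # 8 eggs/day over 9.6 days = 76.8 eggs
staged = 2.05 + 3.54 + 11.84 + 13.60 + 25.84   # per-instar optimum = 56.87 eggs

for name, eggs in (("flat", flat), ("staged", staged)):
    lo, hi = diet_cost_per_1000(eggs)
    print(f"{name:>6}: {eggs:5.2f} eggs/individual -> {lo:.2f}-{hi:.2f} EUR per 1000")
```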
O. laevigatus nymphs were well adapted to survive with a very limited amount of food. They were able to complete development at a very low prey density. With a food supply of only 1 flour moth egg per day, 5% of the nymphs reached adulthood, but with 3 eggs/day, 73% of them completed development, as many as with higher availability of Ephestia eggs. For O. insidiosus, almost 60% of individuals were able to develop from egg to adult with 1 Ephestia egg per day [22]. Similarly, in O. sauteri, with a daily supply of 1.25 eggs, 42% of nymphs reached adulthood [21]. Another omnivorous predator, the mirid Macrolophus pygmaeus (Rambur) (Hemiptera: Miridae), a species with a larger body size than Orius, showed 64% juvenile survival when fed only 1.7 Ephestia eggs per day [7].
Moreover, with the lowest food supply (1 egg/day), the mortality from N1 to N3 was not different from that on richer diets in our study. For O. insidiosus, it was reported that survival was not reduced in the first 3 days after hatching with a diet supply of 1 egg per day [22]. Three days is the duration of N1 and part of N2. Indeed, in our study, survival was maintained until N4, which corresponds to around five days after hatching, suggesting a high resilience of young instars facing food shortage, improving their likelihood of finding prey. This ability to survive and complete development under prey scarcity is one of the keys to the successful establishment of O. laevigatus and other omnivorous predators when released early on the crop prior to a pest outbreak, as well as to their continuous presence along the crop cycle despite pest density fluctuations [26].
On the other hand, body size is an important indicator of the quality of a predator as a biological control agent, as it has a profound effect on most biological and ecological traits such as fecundity, longevity, predation, and resistance to starvation [27][28][29][30][31]. In addition, adult body size is determined by the nutritional characteristics (quantity and quality) of the diet ingested during immature development [8][9][10][11][12][13][14][15][16][17][18][19][20][31]. As stated above, the minimum amount of Ephestia eggs for optimal survival and developmental rate, and for cost-effective rearing, was assessed at 8 Ephestia eggs per day. Similarly, male body size was also maximized by feeding 8 eggs daily, but for maximum female body size, 10 eggs per day were required. Therefore, given the balanced sex ratio typical of Orius species, a daily supply of 9 Ephestia eggs will be required to maximize body size and hence the biocontrol efficiency of the adults once released on the crop. Similarly, the amount of Ephestia eggs needed to maximize adult body size was higher for females (7.5 eggs/day) than for males (2.5 eggs/day) in O. sauteri [21], but equal (8 eggs/day) for both sexes in O. insidiosus [22].
The oviposition rate of O. laevigatus females increased with the daily number of Ephestia eggs provided, until a plateau was reached at 8 eggs/day. This amount is the same as that obtained for O. insidiosus [22], and very close to the 10 eggs/day for O. sauteri (8 eggs/day was not tested in that study, just 5 eggs/day, which also yielded inferior oviposition in our study) [21]. Orius females are able to lay eggs with a very limited food intake, but not in the absence of nutrition. Although in our study O. laevigatus females laid some eggs when no Ephestia eggs were provided, this was rather a residual effect on early reproduction of nutrition acquired during the juvenile stages [22,23,31]. Orius species are income breeders, which exploit nutritional intake as adults for reproduction, rather than capital breeders, which use resources stored during immature development [23,31,32].
Conclusions
This study indicates a minimum of 1 and 3 Ephestia eggs per day for the first two nymphal instars of O. laevigatus, respectively, and 8 eggs per day for subsequent development until adulthood. However, a daily supply of 9 Ephestia eggs will be required to maximize body size and hence the biocontrol efficiency of the adults once released on the crop. This information may contribute to the reduction of rearing costs in industrial production, which in turn will favor a wider adoption of biological control as a main pest control method. However, further research is warranted to validate these results in large-scale production systems and over several generations. In addition, this study confirms the ability of O. laevigatus to survive and complete development under prey scarcity, which is crucial for its resilience and continuous presence along the crop cycle despite pest density fluctuations. | 3,796.8 | 2022-03-01T00:00:00.000 | ["Biology", "Environmental Science"] |
Ultra-short pulse laser-based fabrication process for lightweight structures in quartz glass applied for mirrors
In the following work, a manufacturing process for lightweight structures in fused silica is presented. Such structures can potentially be used for mirrors to decrease the mass while simultaneously ensuring high stiffness. This talk should give the audience the possibility to assess the selective laser etching technology for mechanical structures in the field of optical mirrors.
Introduction
Classical manufacturing of glass-based lightweight structures is based on grinding or molding, for example for large mirrors. In this work we present a laser-based manufacturing process for lightweight structures (Section 1.2). The advantage of the selective laser-induced etching (SLE) process used is that any 3D shape can be inscribed. The disadvantage, on the other hand, is that the manufacturing time increases linearly with the size of the device. Nevertheless, there are many interesting features that can only be realized with this concept.
The SLE fabrication process is based on two main steps. First, the glass morphology is modified by the laser treatment, and in a second step the glass substrate is shaped by wet chemical etching (Fig. 1).
Fig. 1. Illustration of selective laser etching process
For the modification of the synthetic quartz glass, a laser with a pulse repetition rate of 750 kHz, a pulse duration of 1 ps, a wavelength of 1030 nm, circular polarization, a pulse energy of about 350 nJ, and a focusing objective (20×, NA 0.4, 1.1 mm cover glass correction) is used [1]. The focus has a voxel size of about 3 µm; this focus has to be routed with a controlled multi-axis system to form a 3D object. Consequently, the 3D CAD design must be translated into lines and arcs. These curves are finally written into the glass and etched with hot potassium hydroxide (8 mol/l, 80 °C).
The advantage of this technology is, on the one hand, the design freedom to realize undercuts and freely movable parts; on the other hand, subtractive manufacturing means that the material quality is defined by the glass and not by the machining process. As a result of the etching, an RMS roughness of about 1 to 2 µm is observable, but there are no sharp cracks. Compression tests on small etched cylindrical pillars with a diameter of 17 µm and a height of 28 µm showed that these structures can withstand a compressive stress of up to 8 GPa [2].
The aim of lightweight structures is the reduction of mass while simultaneously ensuring high stiffness. The stiffness of a part against deflection is proportional to the moment of inertia of the part's cross-section (equivalently, the deflection under a given load is inversely proportional to it). The moment of inertia (second moment of area) is defined by the equation

I = ∫_A y² dA,

where y is the distance from the neutral axis and A is the cross-sectional area. As a result, the material further away from the center contributes more to the stiffness of the part. For this reason, the material in the center is removed in lightweight structures. [3]
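A small numerical illustration of this effect (our own sketch; the dimensions are arbitrary): hollowing out the core of a rectangular section removes most of the mass while sacrificing comparatively little of the second moment of area, because the removed material sits near the neutral axis where the y² weighting is small.

```python
def rect_I(b, h):
    """Second moment of area of a solid b x h rectangle about its
    horizontal centroidal axis: I = b*h^3/12."""
    return b * h**3 / 12.0

b, h, t = 10.0, 5.0, 1.0   # mm; illustrative section with 1 mm face sheets
I_solid = rect_I(b, h)
I_hollow = rect_I(b, h) - rect_I(b, h - 2 * t)   # remove the 3 mm core
A_solid, A_hollow = b * h, b * h - b * (h - 2 * t)

print(f"I retained:    {I_hollow / I_solid:.0%}")   # ~78% of the stiffness...
print(f"mass retained: {A_hollow / A_solid:.0%}")   # ...for only 40% of the mass
```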
SLE applied to generate lightweight structures
In this work, SLE technology was applied to create lightweight structures in a fused silica mirror. Two designs with additional features were realized to demonstrate, on the one hand, the possibility to realize rotating elements (Fig. 2) and, on the other hand, a linear translation system with flexures (Fig. 4).
The mass of the movable part of both designs is decreased by a lightweight structure on the backside of the mirror. This structure consists of honeycombs and pillars connected to the front of the sample (Fig. 3). The web width of the honeycomb in Fig. 5 is 250 µm and the structure depth is 5 mm. This corresponds to an aspect ratio of 20. The rotary axis of the design shown in Figs. 2 and 3 is realized with a plain bearing. Its disadvantage is the clearance fit, which is always greater than about 20 µm due to the etching process. Nevertheless, it can be shown that simple rotary bearings can be integrated with this technology platform. Lightweight structures with very small feature sizes and undercuts can be fabricated by using selective laser-induced etching. In addition, it was demonstrated that additional features like rotation axes and solid flexures can be integrated into devices using the SLE technology platform.
Fig. 2. Lightweight structure combined with a rotation axis
Fig. 3. Detailed view into the lightweight structure. A honeycomb structure is visible at the top and undercuts can be seen in the center, which further reduce the weight. By using two parallel bending beams as shown in Fig. 4, a translation system with two degrees of freedom can be integrated into the quartz glass. In this example, the inner lightweight structure can be deflected by ±250 µm in two directions. The advantage of this system is that, due to the low thermal expansion coefficient (CTE) of about 1 ppm/K, such systems are potentially very stable compared to metal-based solid flexures with a CTE of about 10 ppm/K.
Fig. 4. Plane mirror with a lightweight structure on the backside combined with solid flexures. The flexures provide a translation range of ±250 µm in two directions.
Fig. 5. Detailed view of the lightweight structure of the mirror with the solid flexure design. In the foreground, the honeycomb structure with a web width of 250 µm is visible. | 1,194.2 | 2022-01-01T00:00:00.000 | ["Physics"] |
Grammatical Errors Found in English Writing: A Study from Al-Hussein Bin Talal University
This study investigated the frequent grammatical errors found in the writing of Arab students taking English writing courses at Al-Hussein Bin Talal University. Learners' errors were considered positively, as the best sources to identify students' limitations in English writing. Therefore, the present study intended to investigate the grammatical errors in the English writing of Arab students at Al-Hussein Bin Talal University and to see if there are any differences in the grammatical errors according to year of study. To conduct this study, data was collected from the writing sessions of writing classes taught during the fall semester of 2019. The data was collected, analyzed, and categorized from students all majoring in English Language and Literature and ranging from freshmen to seniors. A grammar test questionnaire designed by the researchers was distributed to the students in these writing sessions. The results showed that the most frequent grammatical error was with the verb tense, with a mean of 3.75, followed by errors in the article (mean of 3.62), wrong word order (3.57), noun endings (3.40), and, least frequent, sentence structure (3.39). The results also showed that the seniors did better than the freshmen, sophomores, and juniors on all parts of the grammar test; that is, the fewest problems were found among the seniors.
Introduction
Learning a second language depends on learning the skills of listening, speaking, reading, and writing. These four skills allow the student to comprehend, produce, and use the language in effective interpersonal communication. They are most often acquired in the order of listening, speaking, reading, and writing. Listening and reading are called passive skills because learners do not need to produce language to do them; they receive and understand it. Speaking and writing are called active skills because learners need to produce language to do them (Ramadan, 2019).
Writing Skill
The most difficult of these four skills is the writing skill. Writing is considered the most difficult task for students because they are not proficient in the language and do not have a good mastery of vocabulary and grammar. They have limited exposure to the language and seldom communicate using it in daily life. For students to become good at the writing skill, they must become good at it in their native language first (Atashian & Al-Bahri, 2018; Zerin, 2007). Teachers teach the students how to form a sentence in Arabic without putting it in context (deductively); the aim is only learning the grammatical rule. Therefore, the student will not be able to apply it in his/her writing. How can we ask students to write in English if they cannot in Arabic? Usually, English is taught and explained in Arabic, which also causes students problems (Al Noursi, 2018; Marek, 2016; Khan, 2011). Even though the textbooks in the curriculum take all the skills into consideration, the problem lies in the teacher and the environment. Students might learn English in the classroom, even though instruction is mainly in Arabic, but they will still not find the right environment to practice it (Kresh, 2009). All of this results in students' weakness in the different skills of the language, and especially in the writing skill, since it builds on the other three skills. Grammatical errors appear very commonly in the work of less proficient ESL students, whose mother tongue "interferes" with the production of correct English. Non-native speakers make numerous mistakes in the use of verbs (for example, incorrect tense choice or incorrect tense form), the articles (a/an, the), particularly students in whose languages these words do not exist, and word order (Frankfurt International School, 2018; Nonkukhetkhong, 2013).
The most common errors among these students usually include noun ending errors, in which a plural or possessive ending is incorrect, omitted, or unnecessary (e.g., "the book were heavy"; "their name is Joe and Mike"), including relevant subject-verb agreement errors (e.g., "she write"); article errors, in which an article or other determiner is incorrect, omitted, or unnecessary (e.g., "an orange"); verb errors, covering all errors in verb tense or form (e.g., "he play football yesterday"), including relevant subject-verb agreement errors; wrong word errors, covering all specific lexical errors in word choice or word form, including preposition and pronoun errors; and, finally, sentence structure errors, which include errors in sentence and clause boundaries (run-ons, fragments, comma splices), word order, omitted words or phrases, and other unidiomatic sentence constructions (Ferris & Roberts, 2001; Clark, 2007).
Arab students learning English as a second language tend to carry over much of the Arabic structure. In dealing with the verb "to be", for example, what happens is that the verb "to be" is not used in the present tense in Arabic, and there is no auxiliary "do"; furthermore, there is a single present tense in Arabic, as compared to English, which distinguishes the simple and continuous forms. What an Arabic speaker does is say "I boy" instead of saying "I am a boy". Another example is saying "she woman" rather than "she is a woman", and so on (BaSaeed, 2013). You can see from the examples given how the verb "to be" is deleted. Also, the Arabic language does not use the present perfect tense, so students would say "I completed my paper, can you see it?" Another problem is that the Arabic language does not have modal verbs, which also causes grammatical mistakes (Jenwitheesuk, 2009; Shoebottom, 2007). The native language clearly affects learning the target language, especially at the beginning stages of acquiring it, when students rely on the grammar of their native languages to overcome any difficulties. However, this method does not always work, because English and Arabic have huge differences in grammar.
Another problem area of grammar is adjectives. In Arabic the adjective comes after the noun, which leads Arab students to make word-order mistakes in written and spoken English: they tend to say "girl tall" rather than "tall girl", or "boy thin" rather than "thin boy". This structure is carried over into English and therefore leads to grammatical problems (Dahami, 2012). Pronouns are another problem: English omits the pronoun in relative clauses, whereas Arabic requires it, which causes mistakes such as "where is the book which I lent it to you last week?" There are many other problems, such as genitive constructions, the indefinite article (which does not exist in Arabic, leading students to omit it when writing in English), sentence structure, word order, and more (Alasfour, 2018). All of these differences between the two languages cause errors among students.
Problem of the Study
The researchers have noticed that students majoring in English have great problems with grammar and commit many grammatical errors in their writing, which makes their writing difficult to read; this observation motivated the present research.
Research Questions
The current study was guided by the following questions: 1) What are the most frequent grammatical errors in the writings of students at Al-Hussein Bin Talal University?
2) Do the grammatical errors differ according to year of study?
Importance of the Study
The researchers believe that identifying the grammatical errors students commit in their writing will help both students and teachers. For students, it will serve as a guide that helps them avoid grammatical errors in their future writing; for teachers, it will serve as input for reviewing the grammar materials, especially the common errors.
Literature Review
Writing is constructed by putting sentences in sequence, one after another, and it should be understandable. The meaning should flow from one sentence to the next, carrying the argument or point of view forward in a clear and concise manner. Without correct grammar, the text becomes hard for the reader to follow because the flow of meaning is interrupted. In writing, the student needs grammatical accuracy to be able to form correct sentences (Effendi et al., 2017; Al-Srdawy & Zeidan, 1988).
Many studies have examined grammatical errors in writing. Some have focused on the ESL writing process (Cambria & Guthrie, 2010; Khwaileh, 1999; Harushimana, 1999), others on the writing process among EFL students (El-Aswad, 2002; Mahfoudhi, 2003), and others on comparing writing in the students' native language with writing in the target language (Al-Amer, 2001; El-Mortaji, 2001). The following studies have touched on the writing process. Alfaqiri (2018) explored the writing difficulties and challenges that Saudi Arabian learners of English as a second language experience at different levels of proficiency. The participants were 114 Saudi Arabian students between the ages of 15 and 36 (53 female, 61 male). The research questions focused not only on the challenges the students faced but also on the metacognitive strategies they used to address them. The results showed that the participants lacked experience in writing English and, as such, experienced grammar issues.
Sönmez and Griffiths (2015) examined the grammar errors in written work made by 30 freshman students in the English Language Teaching Department at a Turkish university. The students were divided into two groups of 15, one at a higher level than the other according to the results of their proficiency exam. The students' written work was marked and the grammar errors were noted down. An analysis of the errors revealed the following error types: plurality, articles, subject-verb agreement, word order, and pronouns.
Nonkukhetkhong (2013) investigated grammatical errors made by first-year English major students at Udon Thani Rajabhat University; the errors found involved verbs, nouns, the possessive case, articles, prepositions, adjectives, adverbs, sentence structure, and ordering. A similar study by Ghabool, Edwina, and Kashef (2012) revealed that Malaysian ESL students have problems in writing, especially in language use (grammar) and punctuation. Abushihab, El-Omari, and Tobat (2011), investigating the written grammatical errors of Arabic students of English as a foreign language at a university in Jordan, found that most of the students' errors related to prepositions (26%), followed by morphological errors, articles, verbs, use of active/passive voice, and tenses. Fageeh (2004) studied 34 male Saudi English-major college students' beliefs regarding their English writing difficulties using interviews, observations, and text analysis. The results showed that the students used Arabic to develop their ideas and rehearse their English text, often writing the text in Arabic and then translating it into English, and that they therefore encountered problems in spelling, vocabulary, and grammar in their writing. A comparative study by Alnofal (2004) of Arabic first-language and English second-language writing processes, on a sample of 161 male and 42 female undergraduate students, used an online survey, Rose's Writer's Block test, narrative tasks, and stimulated-recall interviews. The results showed that Arab students seem to be influenced by their experience in their first language (Arabic) and therefore tend to make more formal grammatical and spelling errors when writing in English than in Arabic. Elhami and Altaha (2000) checked the errors made by Saudi students in both their production and recognition of grammatical structures in English; the students made more errors in production than in recognition, meaning that they are less competent in grammatical production than in grammatical recognition. Finally, a number of ESL teachers had their ESL students write in journals, since journals help ESL students on different levels; the teachers did this to help students improve their writing, since they make a great many spelling and grammatical mistakes, and students usually find it easier to express themselves in writing than in speaking (Spack & Sadow, 1983).
Method
To achieve the purpose of this study, the researchers selected three writing sessions for the study sample during the academic year 2019. The writings from these sessions were collected, then linguistically analyzed and evaluated by the researchers and several English language professors before being statistically analyzed.
Study Sample
The study sample consisted of three writing sessions at Al-Hussein Bin Talal University during the fall semester of 2019. A total of 87 students participated in the study, (39) male and (48) female, all majoring in English Language and Literature and ranging from freshmen to seniors: freshmen (14), sophomores (32), juniors (18), and seniors (23). The researchers relied on convenience sampling, because these subjects were the easiest to obtain for the study.
Procedures
The data was collected during the sixth week of the fall semester of 2019. The analysis of the errors in the students' written work was done in several steps. The researchers started by collecting the data, i.e., the students' written work. The second step was to identify the grammatical errors found in the students' writings, with the help of checklists used to classify the different types of errors made by the respondents. The third step was the classification of these errors into the categories of verb errors (VT), article errors (ART), wrong word (WW), noun ending errors (NE), and sentence structure (SS); these were the main categories that emerged from the analysis. Fourth, the researchers gave their analysis to a group of English language professors who revised, edited, and supervised it. Finally, a questionnaire was designed to include these categories, with a number of sentences following each category.
Instrument
The instrument used in this study was designed by the researchers. It consisted of two parts: the first contains general information about the participants (gender and year of study); the second is a questionnaire measuring the grammatical errors found among the participants, namely verb errors (VT), article errors (ART), wrong word (WW), noun ending errors (NE), and sentence structure (SS).
The questionnaire consisted of 30 items distributed over the five areas of grammatical errors, with 6 items on each.
Reliability and Validity of Instrument
To ensure the validity of the questionnaire, it was given to a number of faculty members of the English department at Al-Hussein Bin Talal University. None of the faculty members suggested adding or deleting anything in the instrument, so the researchers relied on content validity to ensure the questionnaire's validity.
To check its reliability, the instrument was administered to a group of thirty-two (32) students who were not included in the study at hand, and Cronbach's alpha (α) was computed for the grammatical errors used in the instrument.
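For readers who want to reproduce this reliability check, the sketch below shows how Cronbach's alpha could be computed for a 30-item Likert questionnaire. The pilot data, variable names, and use of NumPy are illustrative assumptions, not materials from the study.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 32 respondents x 30 Likert items (values 1-5).
rng = np.random.default_rng(0)
pilot = rng.integers(1, 6, size=(32, 30))
print(f"Cronbach's alpha = {cronbach_alpha(pilot):.3f}")
```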
Results
The data was entered into SPSS, and the means, standard deviations, and a one-way ANOVA were calculated for the grammar questionnaire.
To answer the first research question, "What are the most frequent grammatical errors in the writings of students at Al-Hussein Bin Talal University?", the means and standard deviations of the grammatical errors were computed and ordered as shown in Table 2. The results in Table 2 show that the most frequent grammatical error was verb tense, with a mean of (3.75), followed by article errors (3.62), wrong word (3.57), noun ending (3.40), and, least frequent, sentence structure (3.39). This agrees with a study by Li et al. (2016) on grammatical mistakes in college English writing, which found that students make mistakes in verbs, nouns, articles, sentence structure, and more. It also agrees with Leman (2014), who studied common grammatical errors in students' writing and found that students make mistakes in word order, articles, subject-verb agreement, verb tense, and more. Sawalmeh and Hassan (2013) analyzed 32 written English essays by 32 Arabic-speaking Saudi preparatory-year learners of English. All the errors in the essays were identified and classified into categories, and the results showed that the Arabic speakers committed ten common error types: verb tense, word order, singular/plural form, subject-verb agreement, double negatives, spelling, capitalization, articles, sentence fragments, and prepositions. A total of 1,422 errors were found; the most predominant were verb tense, 235 (16%), word order, 156 (10%), and subject-verb agreement, 114 (8.2%).
To answer the second research question, "Do the grammatical errors differ according to year of study?", means and standard deviations were computed as shown in Table 3. It is clear from Table 3 that there are differences in the grammatical errors found among students according to year of study, in favor of the seniors, who made the fewest errors. The seniors did better than the freshmen, juniors, and sophomores on all error types, as shown above. This could be because seniors are in their fourth and last year of study and have taken many courses that helped develop their English language skills. To see whether these differences are significant, a one-way ANOVA on the means of the study participants' grammatical errors according to year of study was conducted, as shown in Table 4. Table 4 shows that there are significant statistical differences at the (α = 0.05) level on all the grammatical errors. To locate the differences between the years of study (freshmen, sophomores, juniors, and seniors), a Scheffé test was conducted, as shown in Table 5. Table 5 shows a significant statistical difference at the (α = 0.05) level between the years of study in favor of the seniors; the mean for this group was (3.3), for the sophomores (3.5), for the juniors (3.6), and for the freshmen (3.7). This result agrees with Sönmez and Griffiths (2015), who studied the grammar errors in the written work of 30 students in the English Language Teaching Department at a Turkish university. The students were divided into two groups of 15, one at a higher level than the other according to the results of their proficiency exam. The students' written work was marked and the grammar errors were noted down. An analysis of the errors revealed the following error types: plurality, articles, subject-verb agreement, word order, pronouns, prepositions, parts of speech, auxiliaries, negation, tense agreement, and lexis. Of these, most errors overall were made with parts of speech, plurality, subject-verb agreement, prepositions, and articles, and, as might have been predicted, the students in the lower group made more errors (an average of 3.5) than those in the higher group (an average of 2.8).
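As a rough illustration of the reported analysis pipeline (group means and standard deviations, a one-way ANOVA across year of study, and a post-hoc comparison), the sketch below uses SciPy and statsmodels on hypothetical scores. Tukey's HSD stands in for the Scheffé test, since the latter is not directly available in these libraries, and all numbers are invented.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Hypothetical per-student mean error scores by year of study (1-5 scale).
groups = {
    "freshman":  rng.normal(3.7, 0.4, 14),
    "sophomore": rng.normal(3.5, 0.4, 32),
    "junior":    rng.normal(3.6, 0.4, 18),
    "senior":    rng.normal(3.3, 0.4, 23),
}

for name, g in groups.items():
    print(f"{name:9s} mean={g.mean():.2f} sd={g.std(ddof=1):.2f}")

f_stat, p = f_oneway(*groups.values())           # one-way ANOVA across years
print(f"one-way ANOVA: F={f_stat:.2f}, p={p:.4f}")

scores = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(g) for g in groups.values()])
# Tukey HSD used here as a post-hoc stand-in for the Scheffé test.
print(pairwise_tukeyhsd(scores, labels, alpha=0.05))
```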
Limitations
The results of the current study are subject to a number of limitations that may hinder the generalization of its results.
Objective limits: The study was limited to the grammatical errors found in the English writing of students at Al-Hussein Bin Talal University.
Place limits: The study was applied in Al-Hussein Bin Talal University in the Kingdom of Jordan.
Time limits: The study was applied in the fall semester of the academic year 2019.
Sample size limits: The study was applied to the students taking writing courses during the application of the study.
Discussion
The study aimed to investigate the grammatical errors in the English writing of Arab students at Al-Hussein Bin Talal University by identifying and categorizing the grammatical errors in their English essay writing. The results showed that the seniors did better on the grammar questionnaire than the freshmen, juniors, and sophomores, which agrees with the studies by Sönmez and Griffiths (2015) and Bataineh (2005). From this study and the many others on students' grammatical errors, it is obvious that writing in a foreign language is not an easy task for either students or teachers. To be successful in the foreign-language writing classroom, both teachers and students should be aware of the difficulties they can face, and writing teachers should prepare themselves to overcome these difficulties by means of effective, planned instructional strategies.
Conclusion
This research was meant to discover the types of grammatical errors dominantly found in the writings of Al-Hussein Bin Talal University students. Based on the results, the most frequent errors were verb tense, with a mean of (3.75), followed by article errors (3.62), wrong word (3.57), noun ending (3.40), and, least frequent, sentence structure (3.39). Regarding the errors made according to year of study, the seniors made the fewest errors of all types.
Derived from the results, this study can help teachers support students in overcoming many of their errors; when teachers are aware of the grammatical errors the students commit, they can adopt and implement effective language-teaching techniques that also raise students' awareness of the differences between the two languages that usually cause such errors.
Recommendations
After conducting this study, the researchers suggest a number of recommendations. First, students should be encouraged to pay more attention to their grammatical mistakes, which could be done by enriching their knowledge of English and differentiating between the rules of their first language and those of the English they learn. Students should also be encouraged to read more, because this will help them learn more and reduce their grammatical mistakes. It is also recommended that students be encouraged to examine their own written texts and try to find and correct the grammatical errors by themselves.
Future Research
The study could be conducted with a larger group and at different educational institutions to test the external factors that influence learner language performance. It could also be conducted with participants of different ages, allowing comparison of the grammatical errors in writing committed by Arab learners of English across age groups. | 5,427.8 | 2020-08-24T00:00:00.000 | [
"Linguistics",
"Education"
] |
Impact of explainable artificial intelligence assistance on clinical decision-making of novice dental clinicians
Abstract Objective Despite artificial intelligence (AI) being used increasingly in healthcare, implementation challenges exist, leading to potential biases during the clinical decision process of the practitioner. The interaction of AI with novice clinicians was investigated through an identification task, an important component of diagnosis, in dental radiography. The study evaluated the performance, efficiency, and confidence level of dental students in the radiographic identification of furcation involvement (FI), with and without AI assistance. Materials and Methods Twenty-two third- and 19 fourth-year dental students (DS3 and DS4, respectively) completed remotely administered surveys to identify FI lesions on a series of dental radiographs. The control group received radiographs without AI assistance, while the test group received the same radiographs plus AI-labeled radiographs. Data were analyzed using the chi-square, Fisher's exact, analysis of variance, or Kruskal-Wallis tests, as appropriate. Results Performance between groups with and without AI assistance was not significantly different except for 1 question, where the tendency was to err with the AI-generated answer (P < .05). The efficiency of task completion and the confidence levels did not differ significantly between groups. However, both groups believed the use of AI would improve clinical decision-making. Discussion Dental students detecting FI in radiographs with AI assistance showed a tendency towards over-reliance on AI. Conclusion AI input impacts clinical decision-making, which might be particularly exaggerated in novice clinicians. As AI is integrated into routine clinical practice, caution must be taken to prevent over-reliance on AI-generated information.
Lay Summary
Artificial intelligence (AI) is being used with increasing frequency in the healthcare field to provide earlier and easier detection of abnormalities. Although these AI systems are designed to optimize accuracy in detecting abnormalities, less is known about the interaction between the clinician and the system. We tested the interaction of an AI system with novice clinicians (dental students) who were attempting to diagnose abnormalities in dental radiographs. One group of the dental student participants received AI assistance, whereas the other group did not. We investigated 3 primary metrics during AI and participant interactions: (1) performance, (2) efficiency, and (3) confidence. Our findings suggest that novice clinicians are more likely to over-rely on AI, leading to potentially lower performance when assisted by an AI system. In addition, the AI system used in this study did not improve decision-making speed or confidence in novice clinicians. Despite the limitations of this single study, those developing AI systems to aid clinician decision-making should keep in mind the psychological (machine/human) interaction and end-user experience, which can affect clinical performance and patient safety. Additionally, both participants who used AI assistance and those who did not felt that these systems have the potential to improve clinical decision-making.
BACKGROUND AND SIGNIFICANCE
Artificial intelligence (AI) is increasingly being used in the healthcare field to provide clinicians with evidence-based decision aids. 1 In the field of dental radiology, AI programs have been used to identify radiographic landmarks in orthodontic treatment planning, diagnose maxillofacial cysts and tumors, classify lesions associated with teeth, estimate the degree of alveolar bone loss, and detect dental caries, among other applications. 2 Ideally, the use of AI increases the efficiency and accuracy of clinical decision-making, leading to better patient outcomes. 3 However, while the accuracies of AI models are often reported, their accuracies when implemented clinically as decision support systems are rarely assessed. Further, despite the accuracy of AI models used to identify medical/dental conditions, the implementation of these systems poses potential biases and risks in clinical diagnosis. In particular, confirmation bias and the anchoring effect can affect diagnostic accuracy. 4 If a clinician suspects a particular diagnosis and an AI program agrees, the clinician might fail to consider other possible diagnoses. Similarly, a clinician could become anchored to the diagnosis suggested by an AI program and consider only it moving forward. 5,6 Automation bias and complacency can also affect clinicians using AI programs. 6 Automation bias involves underestimating AI error and thus accepting AI-generated input as ground truth. An example of complacency bias would be a clinician who suspects the AI-generated input is incorrect yet declines to investigate further simply because it was computer generated. These biases can interfere with the accuracy of a diagnosis, and clinicians might be more susceptible to them when presented with data from an AI program. 4,7 As the accuracy of AI-based models continues to increase, the implementation of these systems for clinical decision support will become increasingly important. Most studies have consistently found that the use of AI programs increases efficiency and minimizes the time clinicians spend on tasks. 8,9 Therefore, AI has great potential to improve patient care, but susceptibility to potential biases must be minimized for its successful implementation in healthcare. Healthcare providers, especially those who are less experienced, can over-rely on the data provided by AI programs, leading to inaccurate diagnoses. 4 Diagnostic sensitivity can even decrease among radiologists using AI programs compared with when they do not use them. 10 Thus, action is needed to reduce tunnel vision in healthcare providers and ensure that AI programs truly enhance clinical decision-making. Asan et al. 11 recommend cultivating a healthy level of skepticism towards AI-generated information in order to limit over-reliance on computer programs while utilizing their benefits. They suggest that increasing the fairness, transparency, and robustness of AI programs will help create an ideal, balanced relationship between healthcare providers and AI. AI programs will continue to be refined and improved, but it is imperative to develop the correct level of trust between AI and clinicians to maximize diagnostic accuracy during clinical implementation.
Radiographic interpretation plays an important role in the diagnosis of periodontitis, a common inflammatory disease that causes bone loss and eventual tooth loss. 12 Furcation involvement refers to the loss of alveolar bone between the roots of multirooted teeth and impacts the prognosis of the involved tooth. 13,14 Due to dental students' limited clinical experience, they often struggle with the radiographic diagnosis of furcation involvement. This study evaluated the use of AI as a diagnostic tool for furcation involvement for dental students. A convolutional neural network (CNN) model was designed to identify furcation involvement radiographically with an accuracy of 81%, using labels from calibrated periodontists as the gold standard. It was hypothesized that use of the CNN would increase the performance of dental students in radiographic furcation involvement identification.
OBJECTIVE
The goal of this study was to evaluate the performance, confidence, and efficiency of dental students in the identification of radiographic furcation involvements with and without CNN assistance. Additionally, the study aimed to evaluate student perceptions regarding the use of AI in clinical decision-making. It was hypothesized that students with CNN assistance would show higher (1) performance, (2) efficiency, and (3) confidence than students without CNN assistance, and that students would support the use of AI in healthcare.
Study design and participants
Approval from the Committee for Protection of Human Subjects of the University of Texas Health Science Center at Houston was obtained (HSB-DB-20-1358). Prior to subject recruitment, the U-Net architecture (Convolutional Networks for Biomedical Image Segmentation 15 ) was used to create the CNN model to identify radiographic furcation involvements. A particular benefit of this network is upsampling, which allows the propagation of context information, so the training data represented is larger than in other CNN models. The model used 4 shortcut connections to increase training efficiency. Typical training and testing datasets were used, with a success rate of 81% in identifying furcation involvement (FI). Two surveys using 3 deidentified radiographs of maxillary/mandibular molars with/without buccal/lingual furcation involvements were composed; for 4 of the 5 teeth, the CNN labels agreed with those of calibrated periodontists. One survey included CNN assistance, one did not. Only dental students with clinical patient-care experience from the University of Texas School of Dentistry were included. Twenty-two third-year students (DS3) and 19 fourth-year students (DS4) who responded to the recruitment messages were randomly assigned in equal numbers to groups that completed surveys with and without CNN assistance.
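The paper names the U-Net architecture [15] but gives no implementation beyond upsampling and the 4 shortcut connections. Below is a minimal PyTorch-style sketch of a U-Net-like segmentation network with skip connections; the depth, channel widths, and input size are illustrative assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3x3 convolutions with ReLU, as in the original U-Net.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """U-Net-like encoder/decoder with skip (shortcut) connections."""
    def __init__(self, in_ch=1, out_ch=1, widths=(16, 32, 64)):
        super().__init__()
        self.enc1 = conv_block(in_ch, widths[0])
        self.enc2 = conv_block(widths[0], widths[1])
        self.bott = conv_block(widths[1], widths[2])
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(widths[2], widths[1], 2, stride=2)
        self.dec2 = conv_block(widths[2], widths[1])
        self.up1 = nn.ConvTranspose2d(widths[1], widths[0], 2, stride=2)
        self.dec1 = conv_block(widths[1], widths[0])
        self.head = nn.Conv2d(widths[0], out_ch, 1)  # per-pixel FI mask logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bott(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# One grayscale 128x128 radiograph -> furcation-involvement mask logits.
print(TinyUNet()(torch.zeros(1, 1, 128, 128)).shape)  # torch.Size([1, 1, 128, 128])
```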
Questionnaire
The survey (Table 1) consisted of 9 questions: year of training; general confidence level regarding radiographic identification of furcation lesions before the survey; 5 questions with maxillary/mandibular molar radiographs inquiring about the presence or absence of buccal or lingual furcation lesions (Figure 1); a postsurvey question about confidence in the accuracy of lesion identification; and a postsurvey question about the usefulness of a computer program for identifying furcation lesions radiographically. Confidence level was selected from the following Likert-scale options: (1) very unconfident, (2) not confident, (3) neutral, (4) confident, and (5) very confident. Participants were also asked their opinion on the usefulness of a computer program for identification, selecting whether it was (1) very unlikely, (2) not likely, (3) neutral, (4) likely, or (5) very likely to help improve clinical confidence.
Statistical analysis
Prior to data collection, a power analysis was performed, resulting in a sample size of 20 participants per group to provide 80% power to detect a difference in correct responses of 20% between groups. The results of the surveys were statistically analyzed using R statistical software. 16 The chi-square and Fisher's exact tests were used to identify differences in accuracy and confidence (presurvey and postsurvey) between the test and control groups, as well as in a subgroup analysis within third- and fourth-year students. A 1-way analysis of variance (ANOVA) was used to detect changes in confidence from pre- to postsurvey compared between the test and control groups. ANOVA was also used to detect statistically significant differences in the amount of time spent on each question between and within the test and control groups. A significance level of P = .05 was used for all tests.
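To make the test choices concrete, the sketch below shows how the chi-square, Fisher's exact, and one-way ANOVA tests could be run with SciPy on data of the shape described here. The counts and timing values are hypothetical, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact, f_oneway

# Hypothetical 2x2 table: correct/incorrect answers on one question,
# control (no AI) vs. test (AI-assisted) group.
table = np.array([[15, 6],    # control: correct, incorrect
                  [10, 10]])  # test:    correct, incorrect

chi2, p_chi2, dof, _ = chi2_contingency(table)
print(f"chi-square: chi2={chi2:.2f}, p={p_chi2:.3f}")

# Fisher's exact test is preferred when expected cell counts are small.
odds, p_fisher = fisher_exact(table)
print(f"Fisher's exact: OR={odds:.2f}, p={p_fisher:.3f}")

# One-way ANOVA on per-participant completion times (seconds), by group.
rng = np.random.default_rng(2)
t_control = rng.normal(91.6, 30.0, 21)
t_test = rng.normal(71.0, 29.2, 20)
f_stat, p_anova = f_oneway(t_control, t_test)
print(f"ANOVA on times: F={f_stat:.2f}, p={p_anova:.3f}")
```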
RESULTS
A total of 21 students (12 third-year and 9 fourth-year dental students) completed the control survey, and 20 students (10 third-year and 10 fourth-year dental students) completed the test survey.
There was no significant difference in agreement between the test and control groups regarding the classification of furcation involvement for all questions except question 2 (P < 0.05; Figure 2). In question 2, the control group was more likely not to identify a furcation involvement, while the test group with CNN assistance was more likely to agree with the CNN-presented data identifying a furcation involvement.
Participants without CNN assistance spent 91.59 ± 121.67 seconds identifying the 5 furcation involvement lesions, and participants with CNN assistance spent 70.98 ± 29.18 seconds; there was no statistically significant difference between groups. In a subgroup analysis, there was also no statistically significant difference in time spent identifying the 5 lesions between third- and fourth-year dental students (96.05 ± 32.19 and 64.73 ± 34.18, respectively). Time spent per question per group did not differ significantly (Table 2), yet in a subgroup analysis for question 2, the question with the most deviation in agreement between groups, the third-year dental students spent more time on this question than the fourth-year dental students within the CNN-assisted group (19.64 ± 3.30 and 11.34 ± 1.58, respectively; P = .02).
The presence or absence of CNN assistance did not significantly affect the change in confidence level from before to after the survey. The presurvey confidence of the control and test groups did not differ significantly (Table 3) on a scale from 1 to 5, where 1 is very low confidence and 5 is very high confidence (3.19 ± 0.85 and 3.25 ± 0.89, respectively). The postsurvey confidence of the groups with and without CNN assistance was closer to statistical significance, yet still not significant (3.29 ± 0.70 and 3.30 ± 0.78, respectively).
Confidence levels differed significantly by experience level between third- and fourth-year dental students (Table 3). Prior to the survey, third-year students were less confident than fourth-year students in their ability to radiographically identify furcation involvement (2.82 ± 0.83 and 3.68 ± 0.65, respectively; P = .002). After the survey, third-year students were still less confident than fourth-year students (2.95 ± 0.77 and 3.68 ± 0.46, respectively; P = .007).
Interactions with CNN-assisted radiographs of furcation involvement lesions did not significantly affect perceptions of the helpfulness of a computer program for improving clinical confidence. Both the control and test groups felt strongly that a computer program would help clinical confidence in furcation lesion identification, on a scale from 1 to 5 with 1 denoting very low agreement and 5 very high agreement (4.38 ± 0.58 and 4.05 ± 1.07, respectively). The control group's responses ranged from 3 to 5 and the test group's from 1 to 5.
DISCUSSION
AI models that provide clinical decision support to clinicians traditionally focus on categorization accuracy. Despite improvements in data and AI models, many decision support systems fail in real-world clinical applications. 17 The focus of this study was to investigate the AI-clinician interaction through the accuracy, efficiency, and confidence of novice clinicians. Although a highly accurate decision support system might perform well without human interaction, we investigated the end-user accuracies that ultimately produce a diagnosis and treatment decision. These end-user accuracies can affect clinical outcomes and cause patient harm, particularly if misclassifications result in inappropriate treatment decisions. In our example, the presence of radiographic FI is a factor in deciding on periodontal surgical intervention.
Accuracy
The accuracy of identifying FI in radiographs by inexperienced clinicians was generally not affected by AI assistance. However, when presented with 1 tooth that posed a higher degree of diagnostic challenge, dental students were more likely to over-rely on AI assistance compared with students without assistance. In the radiograph (Figure 1B, tooth on the right of the picture), the AI program had detected the presence of FI, yet the gold standard, as labeled by 3 calibrated experienced periodontists, indicated no FI. The inter-rater reliability between the calibrated periodontists was 0.63 ± 0.05; disagreements in labeling FI were resolved by consensus. Radiographic identification of FI can be difficult in some cases even for experienced clinicians; therefore, despite comparison with a true gold standard, we found that inexperienced clinicians presented with AI assistance are more likely to be biased by the additional information. Potential biases include anchoring, confirmation, automation, and complacency biases. This study did not attempt to discriminate the effects of particular biases; doing so is a potential future direction for minimizing their overall effect.
Efficiency
In this study, efficiency was measured by completion time when presented with a diagnostic choice, for individual questions and overall. Previous studies show that AI assistance allows faster radiographic identification for experienced clinicians. 8,9 Although the findings of this study of inexperienced clinicians did not show statistically significant differences in overall completion speed, the AI-assisted group had much less variability, and the magnitude of the completion times generally favored AI assistance increasing identification speed.
When presented with a more challenging diagnostic choice (Figure 1B, tooth on the right of the picture) and additional information (AI assistance), a minimal level of experience potentially leads to increased time to diagnose using radiographs. In this study, participants with less experience in the AI-assistance condition were slower to make a diagnostic decision. One reason might be the presentation of too much visual information, resulting in slower and possibly less accurate decision-making. 18 However, the results and design of the study limit further conclusions.
One particular limitation of the study was that the survey was remotely administered; therefore, the experimental environment was not standardized among subjects. Additionally, the number of questions on the survey was minimized to improve participation and completion, which limited the number of diagnostic questions that could be analyzed.
Confidence
As expected, confidence levels were higher in the more experienced group. Despite only a 1-year difference in clinical experience between groups, those in their fourth year of dental school were more confident in radiographic diagnosis of FI than those in their third year. The presence of AI assistance did not affect students' confidence levels. Human trust is an important factor that can lead to confidence in AI and improve decision-making outcomes. 19 In this study, participants were provided with text noting that the AI had a diagnostic accuracy of 80%; the experimental conditions might therefore have implicitly reduced trust in the AI and affected the confidence outcomes.
Additionally, the presentation model of AI-assistance was in the form of graphical representation overlaid on the X-rays (Figure 1). This mode of presentation is therefore an explainable system versus the typical black-box decision of traditional AI models, which can improve confidence and thus acceptance in AI. 20 The results of this study did not support that the presentation of AI in its current form changed confidence levels. It might be possible that the experience level of the clinician affects confidence in AI despite an explainable system. The environment was also limited to 1 survey without user feedback in a clinical setting that through continued use might build additional trust and confidence in the AI system. Additionally, the visual user interface (UI) might affect trust/confidence, and future studies to elucidate more beneficial presentation styles might be warranted.
Perceptions of AI
Regardless of whether they interacted with AI, participants in this study generally felt strongly that software can improve clinical confidence in identifying FI. Dental students are exposed to multiple digital technologies that already use AI; although most might not be aware of the AI within these technologies, the students surveyed were willing to accept help from an AI-based system. The accuracy, efficiency, and confidence measured in novice clinicians provide insight into human-AI interactions in a clinical detection task. However, this single study has a limited number of participants, made no attempt to identify or discriminate biases, was remotely administered in a non-standardized experimental environment, and asked participants a limited number of diagnostic questions. A larger study in a standardized environment testing a different clinical task would improve the generalizability of these results. Future work might also examine whether interactions are affected by provider knowledge of varying degrees of AI accuracy and its effect on human-AI trust. In addition, these findings were obtained with novice clinicians, and experienced clinicians might not show the same effects.
CONCLUSION
Though the integration of AI into healthcare has great potential, caution must be taken with its use. Given a less experienced clinician and a more challenging diagnostic choice, presenting additional information will likely decrease efficiency and increase the potential for biases. Diagnostic inaccuracies can lead to incorrect treatment as well as delay correct diagnosis and treatment, which can have serious consequences for patients. 21 When implementing AI systems for diagnostic decision-making, not only the visual presentation of the UI but also the psychological environment of the AI-clinician relationship must be considered.
AUTHOR CONTRIBUTIONS
All authors meet the criteria for authorship based on the ICMJE guidelines.
ACKNOWLEDGMENTS
We thank Mr. Chih-Yu Hsu for constructing the AI model and Dr. Nathaniel Holland for conducting the statistical analyses for this project.
CONFLICT OF INTEREST STATEMENT
None declared.
DATA AVAILABILITY
The data underlying this article will be shared on reasonable request to the corresponding author.
Table 3. Pre- and postsurvey confidence in ability to radiographically identify furcation involvement (mean ± SD). | 4,587.8 | 2022-04-06T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Belief Propagation Decoder With Multiple Bit-Flipping Sets and Stopping Criteria for Polar Codes
Compared with successive cancellation list (SCL) decoders, belief propagation (BP)-based decoders suffer performance loss in middle- and high-signal-to-noise ratio (SNR) regions. By analyzing the behavior of the incorrect decoding results of the bit-flipping BP decoder with a critical set of order $\omega$ (BFBP-CS$^{\omega}$), we found that undetected errors mainly contribute to the error floor. Based on this observation, we propose a belief propagation decoder with multiple bit-flipping sets (BFSs) and stopping criteria (BP-MF-MC) in this work. We use multiple stopping criteria to identify undetected errors and a small BFS to find an additional estimated codeword given by the bit-flipping BP (BFBP) function. For uncorrected errors, we use multiple BFSs to find estimated codewords with the BFBP function. Furthermore, we propose a method to dynamically generate a BFS based on the submatrix check. This method can remove unnecessary bit-flipping positions and increase the order of the critical set. Then, the best codeword is selected from all estimated codewords according to the maximum likelihood principle. Numerical results show that BP-MF-MC performs similarly to the cyclic redundancy check-aided SCL (CA-SCL) decoder with list size 16 and is slightly worse than CA-SCL with list size 32.
I. INTRODUCTION
Polar codes are well known for their ability to achieve Shannon capacity and their low encoding and decoding complexity [1]. The successive-cancellation (SC) decoding algorithm proposed by Arıkan [1] is one of the common decoding methods for polar codes. To improve the performance of SC, Tal and Vardy [2] introduced a successive cancellation list (SCL) decoding algorithm whose performance is very close to that of maximum-likelihood decoding.
On the other hand, the belief propagation (BP) algorithm [1], [3] is theoretically more parallel than SC-based algorithms. However, the BP algorithm often has a much higher computational complexity than SC. To lower the complexity, early-stopping criteria [4]-[6] were proposed to reduce the number of iterations of the BP algorithm. The authors of [7] proposed a subfactor graph-freezing technique to reduce the average number of computations as well as the average number of iterations required by the BP algorithm. To reduce the required memory, a stage-combined BP decoding algorithm [8] was introduced to reduce the decoding latency and memory requirement. Moreover, a generalized BP algorithm [9], [10] based on modified factor graphs was proposed to further improve the performance of the BP algorithm.
Additionally, the error-correction performance of the BP algorithm is worse than that of the SCL decoder. Several BP-based algorithms have been proposed that outperform the conventional BP algorithm. The BP list decoder (BPL) was proposed based on permuted factor graphs [11], [12]. The performance of the BPL decoder is close to that of the SCL decoder but inferior to that of the cyclic redundancy check (CRC)-aided SCL (CA-SCL) decoder. In [13], the proposed BP bit-strengthening (BPBS) decoder performs similarly to the SCL decoder in medium- and high-SNR regions. However, BPL and BPBS benefit only slightly or not at all from an outer CRC code.
In [14], a parity-check matrix was introduced to improve the performance of the BP decoder. In [15], CRC-polar BP (CPBP) and neural CPBP (NCPBP) decoders were proposed, achieving significant error-correction performance improvements over conventional CRC-aided BP decoders. Inspired by SC-based bit-flipping decoders, a bit-flipping BP decoder using a critical set of order ω (BFBP-CS^ω) was proposed in [16]. BFBP-CS^ω uses a CRC check to detect errors and exhaustively sets the a priori knowledge of the unreliable bits to find a codeword that passes the CRC check. However, the BFBP-CS^ω decoder exhibits an error floor in high signal-to-noise ratio (SNR) regions.
In this paper, we aim to lower the error floor of the bit-flipping BP (BFBP) decoder and design a BP decoder with performance similar to that of CA-SCL decoders. The main contributions of this paper are summarized as follows:
1) We propose a belief propagation decoder with multiple bit-flipping sets (BFSs) and stopping criteria (BP-MF-MC), which is a generalization of BFBP-CS^ω. We analyze the behavior of the incorrect decoding results of the BFBP-CS^ω decoder and find that undetected errors contribute to the error floor in high-SNR regions. Therefore, the proposed decoder uses multiple stopping criteria to identify undetected errors and multiple BFSs to find more estimated codewords. Finally, the best codeword is selected from the estimated codewords according to the maximum likelihood principle.
2) We present several types of BFSs. We propose a method to dynamically generate a BFS based on the submatrix check, which removes unnecessary bit-flipping positions and thus increases the order of the critical set. The numerical results show that configurations with a dynamically generated BFS improve performance.
The rest of this paper is organized as follows. Section II introduces the BP and BFBP-CS^ω algorithms. Section III analyzes the decoding error properties of BFBP-CS^ω. Section IV proposes the BP-MF-MC algorithm. Section V presents the numerical results of BP-MF-MC for the polar codes (2048, 1024+24), (1024, 256+24), and (1024, 768+24) and evaluates the computational complexity of BP-MF-MC. Concluding remarks are given in Section VI.
A. NOTATION
In this work, we use letters W in standard font to denote scalars and boldface letters W to denote vectors and matrices.
B. POLAR CODES
Polar codes are linear block codes based on the phenomenon of channel polarization, in which individual channels are recursively combined and split such that their mutual information tends toward either 1 or 0. In other words, some of these channels become completely noise-free, while the others become completely noisy. Furthermore, the fraction of noiseless channels tends toward the capacity of the underlying binary symmetric channels [1]. Polar codes are specified by a generator tensor G_N, where N = 2^n is the code length. A polar code (N, r) can be generated in two steps. Let A and A^c be the reliable and unreliable positions, used for the information and frozen bits, respectively. First, an N-bit message u is constructed by assigning the r information bits to A and 0 to A^c. Then, u is multiplied by the generator G_N to produce the N-bit transmitted codeword x = uG_N.
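As a concrete illustration of the encoding step x = uG_N, the sketch below builds G_N as the n-fold Kronecker power of the polarization kernel F = [[1,0],[1,1]] and encodes a toy (8, 4) code. It ignores the bit-reversal permutation used in some conventions, and the reliable set A is invented for the example.

```python
import numpy as np

def polar_encode(u: np.ndarray) -> np.ndarray:
    """Encode an N-bit message u (information bits on A, zeros on A^c)
    as x = u G_N, with G_N the n-fold Kronecker power of F."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    n = int(np.log2(len(u)))
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F)  # G_N = F^(kron n)
    return (u @ G) % 2     # arithmetic over GF(2)

# Toy (8, 4) polar code with a hypothetical reliable set A = {3, 5, 6, 7}.
A = [3, 5, 6, 7]
u = np.zeros(8, dtype=np.uint8)
u[A] = [1, 0, 1, 1]  # information bits
print(polar_encode(u))
```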
C. BELIEF PROPAGATION DECODER
The process of polar coding (encoding and decoding) can be represented by a factor graph [1]. Figure 1 shows the factor graph of polar codes with N = 8, which is divided into n = log2 N stages. Each stage has N/2 processing elements (PEs), and each PE has two input and two output variable nodes. The BP decoding of polar codes is the process of passing log-likelihood ratios (LLRs) iteratively through the factor graph. Node (i, j) is associated with two types of LLR: left-to-right R_{i,j} and right-to-left L_{i,j}, where i is the row index on the factor graph at stage j. Each PE computes the R and L messages as

L_{i,j} = g(L_{i,j+1}, L_{i+2^{j-1},j+1} + R_{i+2^{j-1},j}),   (1)
L_{i+2^{j-1},j} = g(R_{i,j}, L_{i,j+1}) + L_{i+2^{j-1},j+1},   (2)
R_{i,j+1} = g(R_{i,j}, L_{i+2^{j-1},j+1} + R_{i+2^{j-1},j}),   (3)
R_{i+2^{j-1},j+1} = g(R_{i,j}, L_{i,j+1}) + R_{i+2^{j-1},j},   (4)

where g(x, y) = 0.9375 · sign(x) sign(y) min(|x|, |y|). The messages R and L are initialized by

R_{i,1} = 0 if i ∈ A, and R_{i,1} = +∞ if i ∈ A^c,   (5)
L_{i,n+1} = llr_i,   (6)

where llr_i is the LLR of the i-th received bit. In this work, BP decoding uses the CRC check as the early stopping criterion to reduce the number of iterations.

Algorithm 1: BP algorithm using CS^ω (BFBP-CS^ω)
1: Input: llr_1^N, A, CS^ω
2: Output: û_1^N
3: Initialize L and R using (5) and (6)
4: û_1^N ← BP(llr_1^N, A, L, R)
5: if û_1^N does not satisfy the CRC then
6:   for all flipping patterns (j_1^ω, b_1^ω) over CS^ω do
7:     Initialize R by (5)
8:     for l = 1 to ω do
9:       … (set the a priori LLR of bit j_l according to b_l)
10:    end for
11:    û_1^N ← BP(llr_1^N, A, L, R)
12:    if û_1^N satisfies the CRC then
13:      return û_1^N
14:    end if
15:  end for
16: end if

The critical set (CS) contains the bit positions that tend to be unreliable; its construction is given in [16]. BFBP-CS^ω is summarized in Algorithm 1. First, the conventional BP algorithm is performed. If the BP decoder fails the CRC test, BFBP-CS^ω exhaustively enumerates all possible values b_1^ω of the bits at positions j_1^ω in CS^ω. Finally, BFBP-CS^ω terminates once an estimated codeword passes the CRC check.
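A minimal sketch of the scaled min-sum kernel g and a single PE update, mirroring the message-update rules above; the flat argument list replaces the (i, j) indexing of the factor graph and is a simplification, not the paper's implementation.

```python
import numpy as np

def g(x, y):
    # Scaled min-sum approximation of the boxplus operation.
    return 0.9375 * np.sign(x) * np.sign(y) * np.minimum(np.abs(x), np.abs(y))

def pe_update(L_top, L_bot, R_top, R_bot):
    """One processing-element update. Inputs: right-to-left LLRs
    (L_top, L_bot) from stage j+1 and left-to-right LLRs (R_top, R_bot)
    from stage j; returns the four outgoing messages of (1)-(4)."""
    L_out_top = g(L_top, L_bot + R_bot)   # (1): left-going, upper node
    L_out_bot = g(R_top, L_top) + L_bot   # (2): left-going, lower node
    R_out_top = g(R_top, L_bot + R_bot)   # (3): right-going, upper node
    R_out_bot = g(R_top, L_top) + R_bot   # (4): right-going, lower node
    return L_out_top, L_out_bot, R_out_top, R_out_bot

print(pe_update(1.2, -0.8, 0.0, 0.5))
```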
III. ANALYSIS OF THE PERFORMANCE OF BFBP-CS^ω
In the first part of this section, we present the simulation conditions used in this work and analyze the performance limits of BFBP-CS^ω using an oracle-assisted BP-CS^ω (OABP-CS^ω). We find that OABP-CS^ω has an error floor in the high-SNR region and that it is difficult to further improve performance simply by increasing ω. In the second part, we analyze the error types of OABP-CS^ω and find that undetected errors mainly contribute to the error floor in the high-SNR region.
A. SIMULATION CONDITION
In this work, the modulation format is binary phase-shift keying, and the channel noise model is additive white Gaussian noise. The maximum number of iterations for the BP decoders is 100. The CRC checks used in this work have lengths of 11, 16, and 24 bits; their polynomials [17] are x^11 + x^10 + x^9 + x^5 + 1, x^16 + x^12 + x^5 + 1, and x^24 + x^23 + x^6 + x^5 + x + 1, respectively.
B. FER LOWER BOUNDS OF THE BFBP-CS^ω
According to [16], the performance and complexity of BFBP-CS^ω increase with ω. If ω ≥ 6, the size of CS^ω, which is approximately 2^ω × |CS|, is too large to simulate its performance. Therefore, we use OABP-CS^ω to predict the theoretical optimal performance of BFBP-CS^ω, which serves as a lower bound on the frame error rate (FER) results.
OABP-CS^ω is almost the same as BFBP-CS^ω except that line 9 of Algorithm 1 is replaced by an oracle rule (7) that sets the flipped value of bit j_l to the true message bit u_{j_l}, where j_l is the l-th position of CS^ω and u_{j_l} is the j_l-th message bit.
The performance of the OABP-CS^ω decoder for the polar code (1024, 768+24) is shown in Figure 2. The code is constructed from the reliability table of [17] and has 24 CRC bits. As ω increases, the FER of OABP-CS^ω decreases; however, the performance difference between two adjacent curves shrinks as ω increases. Compared with CA-SCL, OABP-CS^ω has an error floor at high SNR. For example, OABP-CS^9 performs similarly to or better than CA-SCL with list size 8 (CA-SCL8) in the range [3, 4] dB, but worse than CA-SCL8 if SNR > 4 dB.
C. ANALYSIS OF THE ERRORS OF OABP-CS^ω
We divide the errors of OABP-CS^ω into three types. Error type I includes all undetected errors that satisfy the CRC check. Error type II comprises errors that pass the CRC check during conventional BP decoding. Error type III comprises detected decoding failures of OABP-CS^ω.
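One way to make this taxonomy operational is sketched below; treating types I and II as disjoint outcomes of the flipping process is our reading of the definitions, not an algorithm given in the paper.

```python
def classify_error(u_hat, u_true, crc_ok_bp, crc_ok_flip):
    """Classify one OABP-CS^w decoding trial.
    crc_ok_bp:   CRC already satisfied after conventional BP decoding.
    crc_ok_flip: CRC satisfied by some flipped re-decode (when plain BP failed).
    Returns None for a correct decode, else "I", "II", or "III"."""
    if list(u_hat) == list(u_true):
        return None   # correct decoding, not counted as an error
    if crc_ok_bp:
        return "II"   # wrong word accepted by plain BP's CRC check
    if crc_ok_flip:
        return "I"    # undetected error: a flip gave a wrong CRC-valid word
    return "III"      # detected failure: no flip produced a CRC-valid word

# Toy usage with invented values.
print(classify_error([0, 1, 1], [0, 1, 0], crc_ok_bp=False, crc_ok_flip=True))  # -> "I"
```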
The percentages of the three error types are shown in Figure 3. As shown in the figure, the percentage of type III decreases with the SNR, whereas the percentages of types I and II increase. The percentage of type II is almost the same as that of type I. Moreover, the percentage of type II does not decrease as ω increases. Therefore, we conclude that error type II is the dominant factor accounting for the error floor of OABP-CS^ω and BFBP-CS^ω in the high-SNR region, and the performance of OABP-CS^ω there cannot be further improved simply by increasing ω. In the low-SNR region, error type III is the main cause of decoder failure.

Algorithm 2: BP-MF-MC (lines 13-18)
13: Initialize L and R using (5) and (6)
14: for all ψ_i ∈ Ψ do
15:   U_i ← BFBP(llr_1^N, A, ψ_i)
16: end for
17: end if
18: Select the best codeword û_1^N from U according to the maximum likelihood principle using (8)
IV. BELIEF PROPAGATION DECODERS WITH MULTIPLE BIT-FLIPPING SETS AND STOPPING CRITERIA
Based on the above observations, we generalize BFBP-CS^ω and propose the BP-MF-MC decoder. We use multiple BFSs and stopping criteria to lower the rates of error types II and III simultaneously. The scheme of BP-MF-MC is given in Algorithms 2 and 3.
Let S be an independent stopping criterion other than the CRC check, such as the G-matrix [5], the worst information bits (WIB) [4], or the best frozen bits (BFB) [6]. If the estimated codeword û_1^N satisfies S, it is more likely to be a correct codeword and is output by the algorithm. Let Φ = {φ_1, …, φ_{n_φ}} be a sequence containing n_φ BFSs; the BFSs in Φ can be constructed by various methods.

Algorithm 3: BFBP(·)
…
5: Initialize L and R using (5) and (6)
6: for l = 1 to ω do
…

The scheme of BFBP(·) is shown in Algorithm 3. It is a generalized BFBP-CS^ω decoder over the flipping sets in Φ. The function BFBP(·) differs from the BFBP-CS^ω decoder in two ways. First, on line 5 of Algorithm 3, the BFBP decoder reinitializes both messages R and L at the beginning of its flipping process, whereas BFBP-CS^ω re-initializes only R on line 7 of Algorithm 1; therefore, BFBP(·) can be implemented in parallel. Second, the BFSs in Φ do not require all the bit-flipping positions j_1^ω to be of the same length; thus, Φ can consist of several disjoint BFSs. Let Ψ = {ψ_1, …, ψ_{n_ψ}} be a sequence containing n_ψ BFSs. If û_1^N does not satisfy the CRC check, BFBP(·) is also used to find the decoding results U_i for each ψ_i.
On line 18 of Algorithm 2, the best codeword û_1^N is selected from the set U = {U_1, …, U_{n_u}} according to the maximum likelihood principle [11], as given in (8). Of course, the sequences Φ and Ψ should be carefully chosen to balance the performance and complexity of BP-MF-MC.
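Equation (8) is not reproduced here, but a common ML-style selection rule from the BPL literature [11] picks the candidate that best matches the channel LLRs, as in the sketch below; the exact form of (8) in the paper may differ.

```python
import numpy as np

def select_ml(candidates, llr):
    """Pick the candidate codeword that best matches the channel LLRs.
    Maximizing sum_i (1 - 2 x_i) * llr_i is equivalent to maximum-likelihood
    selection for BPSK over an AWGN channel."""
    scores = [np.sum((1 - 2 * np.asarray(x)) * llr) for x in candidates]
    return candidates[int(np.argmax(scores))]

# Toy usage: three candidate codewords, invented LLRs.
llr = np.array([2.1, -0.4, 1.7, -3.0])
U = [np.array([0, 1, 0, 1]), np.array([0, 0, 0, 1]), np.array([1, 1, 0, 1])]
print(select_ml(U, llr))  # -> [0 1 0 1]
```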
To efficiently find the error bit positions, we propose a method D(λ_th, F, A^c) to dynamically generate the BFS with a submatrix check. This method is inspired by the BP bit-strengthening method [13]. The scheme of D(λ_th, F, A^c) is shown in Algorithm 4. Let G_{2^m} be a generator matrix of size 2^m × 2^m. Let û_k be a row vector of length 2^m, which is the k-th subfactor graph of û_1^N, and let x̂_k be the row vector corresponding to û_k, obtained from hard decisions on the stage-m messages, where R_k and L_k are the k-th row vectors of length 2^m at the m-th stage.

Algorithm 4: D(λ_th, F, A^c)
…
for k = 1 : 2^{n−m} do
  if û_k G_{2^m} = x̂_k then …
end for
Generate the critical set CS using A_f
ω ← floor(log2(F/|CS|))
Generate the critical set CS^ω using A_f
First, the set of frozen bit positions A_f is initialized to A^c. If the submatrix check in Algorithm 4 is satisfied, we take these information bits to be correct and update A_f. Then, we use the updated A_f to generate the CS with the algorithm given in [16]. Finally, Algorithm 4 outputs CS^ω using A_f. In this way, we increase the order ω by eliminating unnecessary bit-flipping positions. According to the performance of OABP-CS^ω, this procedure helps decrease the number of type III errors.
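The sketch below mirrors the recovered skeleton of Algorithm 4 under stated assumptions: the helper `build_cs` stands in for the critical-set construction of [16], and the demo data are invented.

```python
import numpy as np

def dynamic_bfs(u_hat, x_hat_stage_m, frozen, m, F_budget, build_cs):
    """Sketch of D(lambda_th, F, A^c): exclude bit positions whose length-2^m
    subblock already passes the submatrix check u_k G_{2^m} = x_k, then
    rebuild the critical set from the remaining (still suspect) positions."""
    Fm = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(m):
        G = np.kron(G, Fm)                 # G_{2^m} = F^(kron m)
    A_f = set(frozen)                      # start from the frozen set A^c
    size = 1 << m
    for k in range(len(u_hat) // size):
        sl = slice(k * size, (k + 1) * size)
        if np.array_equal(u_hat[sl] @ G % 2, x_hat_stage_m[sl]):
            A_f.update(range(sl.start, sl.stop))  # subblock consistent: exclude from flipping
    cs = build_cs(A_f)                     # critical set over remaining positions
    omega = int(np.log2(F_budget / max(len(cs), 1)))  # w = floor(log2(F/|CS|))
    return cs, omega

# Dummy demo: a stand-in CS builder that just returns the non-frozen positions.
demo_cs = lambda A_f: [i for i in range(8) if i not in A_f]
u = np.array([0, 1, 0, 1, 1, 0, 0, 1], dtype=np.uint8)
x = np.array([1, 1, 1, 1, 0, 0, 1, 1], dtype=np.uint8)  # hypothetical stage-m hard decisions
print(dynamic_bfs(u, x, frozen={0, 1, 2, 4}, m=2, F_budget=16, build_cs=demo_cs))
```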
V. SIMULATION RESULTS
In this section, we introduce several configurations for BP-MF-MC and evaluate the FER performance for three polar codes (1024, 256+24), (1024, 768+24) and (2048, 1024+24). The code (2048, 1024+24) is designed by the Gaussian approximation (GA) method [18]. The design SNR is 2.5 dB, which is optimized according to the simulation results of CA-SCL decoders with various list sizes over a wide range. The polar codes (1024,256+24) and (1024,768+24) are designed by the reliability table given in [17].
A. CONFIGURATIONS
According to Algorithm 2, two sequences are used by the function BFBP(·) of Algorithm 3. The sequence Φ is used on line 8 to provide an additional estimated codeword that passes the CRC check, lowering the number of type II errors; thus, we want Φ to be as small as possible. In contrast, Ψ is used to find a codeword that can pass the CRC check when the conventional BP fails; we therefore want Ψ to be as large as possible. In this work, Φ and Ψ are mainly constructed from five types of BFSs. The first two are the critical sets CS_1 and CS_3. Following the BPBS decoder, bit strengthening is also helpful for improving the performance of the BP decoder; thus, the size of CS_1 for BP-MF-MC is twice the number of critical bit-flipping positions. The third, A \ CS_3, is the information set excluding CS_3. Interestingly, we find that the frozen set A^c can also be used as a flipping set. The last set is the BFS dynamically generated by Algorithm 4. For the polar codes (2048, 1024+24) and (1024, 768+24), λ_th is 9 and F = 3000, while λ_th = 6 for the polar code (1024, 256+24).
In this work, the configuration of BP-MF-MC refers to the way $\Phi$ and $\Psi$ are constructed. In the following, we use the tuple $(|\Phi|, |\Psi \setminus D(\lambda_{th}, F, A_c)|, F)$ to denote a configuration of $\Phi$ and $\Psi$, where $|\cdot|$ is the number of elements of a sequence. The configurations used in this work for BP-MF-MC are summarized in Table 1.
Finally, there are two stopping criteria used in this work. The first is the CRC check used in BFBP(·) and on line 5 of Algorithm 2. The second, S, is the G-matrix check used on line 6 of Algorithm 2.
However, the configurations (0, 2760, ∞) and (0, 4412, ∞) perform similarly to BFBP-CS$_3$ with increasing $E_b/N_0$, especially at $E_b/N_0$ = 3 dB. The main reason is that these two configurations cannot reduce the number of errors of type II when $|\Phi| = 0$. Therefore, configurations with $|\Phi| > 0$ can lower the error floor at high SNRs. The figure shows that there is a performance improvement of (440, 1760, ∞) over BFBP-CS$_3$. The performance of the configurations (440, 4412, ∞), (440, 4412, 9), (1760, 4412, 9) and (0, 3412, 9) is shown in Figure 5. The figure shows that (440, 4412, 9) and (1760, 4412, 9) have the best performance among all the configurations of BP-MF-MC. These two configurations outperform CA-SCL with list size 16 (CA-SCL16) and are even close to CA-SCL with list size 32 (CA-SCL32) when $E_b/N_0$ > 1.75 dB and 24 CRC bits are used. Finally, the performance of (440, 4412, 9) and (1760, 4412, 9) does not show any error floor, and the two are similar to each other. Thus, we can select the smaller $\Phi$ to reduce the complexity of BP-MF-MC; therefore, we choose $\Phi$ = {CS$_1$} in the following numerical examples.
C. COMPLEXITY
The average number of iterations for various decoders is shown in Figure 8. The target code is (2048, 1024+24). It is observed that the configuration (1760, 4412, 9) uses the highest average number of iterations among all the configurations. The configurations (440, 4412, 9), (440, 4412, ∞) and (440, 1760, ∞) use more iterations than BFBP-CS$_3$ while using fewer iterations than (1760, 4412, 9); these three configurations overlap each other in high-SNR regions. (0, 3412, 9) and (0, 4412, ∞) converge to BFBP-CS$_3$ and require slightly more iterations than the conventional BP decoder. Furthermore, the complexity of (0, 3412, 9) is much higher than that of (0, 4412, ∞) if $E_b/N_0 \leq 2.5$ dB. Therefore, although the dynamic flipping set can improve performance, it increases computational complexity, which may decrease the throughput of a hardware implementation due to the large resource requirement.
Next, we perform complexity comparisons between the proposed BP-MF-MC and conventional decoders based on the simulation results. Let $\bar{I}$ be the average number of iterations for BP-based decoders, such as BP-MF-MC and BFBP-CS$_\omega$. Here, for the BP decoder, one computation by a PE given in (1)-(4) is taken to have unit complexity; thus, the computational complexity of a BP-based decoder is calculated as $2Nn\bar{I}$, with $n = \log_2 N$ stages. The complexity of the CA-SCL decoder is taken from [16]. The comparison results are given in Table 2. Please note that the complexity for the configurations given in Table 2 is the average number of PE computations when implementing all the decoders in software. The table shows that the complexity of BP-MF-MC with the configuration (440, 4412, 9) decreases with SNR. At $E_b/N_0$ = 3 dB, the complexity of (440, 4412, 9) is approximately two times that of CA-SCL32, or three times that of BFBP-CS$_3$. The complexity of (0, 3412, 9) is close to that of CA-SCL32 or BFBP-CS$_3$ if $E_b/N_0 \geq 2.75$ dB. Furthermore, the average complexity of the configurations given in Table 2 is much higher than that of the CA-SCL decoders if $E_b/N_0 \leq 2.5$ dB.
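As a quick sanity check of the $2Nn\bar{I}$ figure, the following snippet evaluates it for the (2048, 1024+24) code; the value of $\bar{I}$ used here is purely illustrative.

```python
def bp_complexity(N, avg_iters):
    """Unit-complexity PE computations of a BP-based decoder: 2*N*n*I_bar,
    with n = log2(N) stages (N must be a power of two)."""
    n = N.bit_length() - 1
    return 2 * N * n * avg_iters

# e.g. N = 2048 (n = 11) and an illustrative average of 50 iterations:
print(bp_complexity(2048, 50))  # -> 2252800 unit PE computations
```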
VI. CONCLUSION
In this paper, we propose the BP-MF-MC algorithm for polar codes and present numerical results for the considered polar codes. The simulation results show that BP-MF-MC performs similarly to the CA-SCL16 algorithm. Although the complexity of the proposed algorithm can approach that of the CA-SCL32 decoder in high-SNR regions, it is important for future studies to lower the complexity of BP-MF-MC in low-SNR regions so that it can be used in practice.
| 5,164.6 | 2020-01-01T00:00:00.000 | [ "Computer Science" ] |
Mobile-based system for cost-effective e-learning contents delivery in resource and bandwidth constrained learning environments
The advancement in Information and Communication Technologies (ICTs) has brought opportunities for the development of Smart Cities. The Smart City uses ICT to enhance performance and wellbeing, to reduce costs and resource consumption, and to engage more effectively and actively with its citizens. In particular, the education sector is adopting new ways of learning in Higher Education Institutions (HEIs) through e-learning systems. While these opportunities exist, e-learning content delivery and accessibility in third world countries like Tanzania is still a challenge due to resource and network constrained environments. The challenges include: high cost of bandwidth connection and usage; high dependency on the Internet; limited mobility and portability features; inaccessibility during the offline period and shortage of ICT facilities. In this paper, we investigate the use of mobile technology to sustainably support education and skills development particularly in developing countries. Specifically, we propose a Cost-effective Mobile Based Learning Content Delivery system for resource and network constrained environments. This system can be applied to cost-effectively broaden and support education in many cities around the world, which are approaching the 'Smart City' concept in their own way, even with less available technology infrastructure. Therefore, the proposed solution has the potential to reduce the cost of the bandwidth usage, and cut down the server workload and the Internet usage overhead by synchronizing learning contents from some remote server to a local database in the user's device for offline use. It will also improve the quality of experience and participation of learners as well as facilitate mobility and portability in learning activities, which also supports the all-encompassing learning experience in a Smart City.
Introduction
Higher Education Institutions (HEIs) require a cost-effective and efficient system for the delivery and accessibility of learning contents which favours personalized and innovative learning while minimizing the development and operation cost. However, it has been reported by Bakari, Mbwette, and Salaam (2010) that persistent internet connectivity in third world countries is still a major challenge for both public and private HEIs.
There are existing Learning Management Systems (LMS), such as Moodle, which are adopted by the majority of HEIs in developing countries. Moodle is a web-based LMS which highly depends on the Internet for high performance and reliability. Despite the great opportunity brought about by Moodle and other LMSs, learning content delivery and accessibility is still a challenge in developing countries due to the constraints in resources and bandwidth (Mtega, Bernard, & Dettu, 2013; Milovanović, 2010; Trifonova, Knapp, Ronchetti, & Gamper, 2004; Trifonova, 2006; Jayakumar, Manimaran, & Gopianand, 2013; Swarts & Wachira, 2010). These challenges include the high cost of the bandwidth usage; the limited mobility and portability features; the inaccessibility of e-learning contents during the offline period; the server workloads and the Internet usage overhead. Therefore, there is a need for a technology to bridge these gaps. The adoption of mobile technologies is growing at a rapid pace, bringing along a lot of opportunities that can enhance the learning contents delivery and accessibility for HEIs within resource and network constrained environments. The survey conducted in this study found that the majority of HEIs in Tanzania own basic ICT infrastructures such as Local Area Network (LAN), Internet, computers, and mobile technology that form the basis for the establishment of e-learning. Furthermore, it was found that the majority of students own more than one mobile device which can be used as a vehicle to facilitate the access to learning contents (Mahenge, Mwangoka, & Simba, 2014).
Previous studies (Momo, 2008; MLE, 2009) proposed Mobile Moodle (Momo) and the Mobile Learning Engine (MLE). Both applications are based on J2ME, while the MLE project developed a client application and an additional web version to access Moodle courses from mobile browsers. However, the structure of the XML schema used in MLE caused problems, such as high memory requirements and delays in response time. Piguillem et al. (2012) proposed Moodbile, an Android application that supports both online and offline accessibility by storing offline contents in the memory cache; however, synchronization functionality was not implemented. The proposed systems require a continuous and reliable Internet connection while downloading contents to the memory cache; as a result, they favour learning environments with sufficient resources and reliable Internet connectivity. Lujara (2008) proposed the Compact Disk Read Only Memory (CD-ROM) for offline delivery; however, this solution cannot accommodate current needs, given the rapid growth of the amount of information and the increased number of e-learning users. Furthermore, a client-side proxy architecture for supporting the offline use of learning contents and a bracing approach for increasing the web server performance were suggested (Trifonova, 2006; Makker & Rathy, 2011). However, pre-fetching and caching contents to the local memory were done automatically by the system, thereby filling the cache with documents without any prior knowledge of the user's needs. For third world countries, taking Tanzania as an example, the extension of the learning content delivery to mobile computing devices and the implementation of cost-effective solutions for HEIs within resource and bandwidth constrained environments is still an open research topic (Tlhapane & Simelane, 2010).
In this paper, we explore opportunities brought by mobile technologies to find a cost-effective solution for e-learning content delivery and accessibility for HEIs in resource constrained environments, which will serve as a basis for implementing the 'Smart City' concept in a developing country. Nam and Pardo (2011) argued that a Smart City is supported by strategic principles aligning to three main dimensions: technology, people, and institutions. Further key characteristics of the Smart City point to the integration of infrastructures and technology-mediated services, social learning for strengthening human infrastructure, and governance for institutional improvement and citizen engagement (Andone, Holotescus, & Grosseck, 2014).
We demonstrate a Cost-effective Mobile Based Learning Content Delivery system for HEIs, designed with the constraints in resources and network bandwidth in mind. The developed system is a significant contribution toward enhancing the availability and accessibility of learning resources in a reliable, cost-effective and timely manner. Ultimately, the proposed system is expected to improve the quality and participation of the learners as well as facilitate mobility and portability in learning activities.
Methodology
This development was conducted at HEIs in Tanzania. The system design requirements were gathered at three HEIs, where different techniques and methods for requirement gathering were employed, including interviews, rapid prototyping, and a review of the empirical literature.
Software development life cycle (SDLC)
A software development process is often described in terms of a set of activities needed to transform user requirements into a software system. At the highest level of abstraction, a development process is sometimes called a development life cycle; one of the models used is the evolutionary development model, which includes exploratory development and rapid prototyping. It has been reported by Pratikshya (2013) that the major advantage of the evolutionary development model is its flexibility, that is, the ability to accommodate changes. Customer needs are dynamic and keep changing, and the evolutionary development model can accommodate such changes by adding features to the system as users propose them and as needs arise. Furthermore, due to time limitations, this study develops an initial version of the software, which can later be evolved into higher versions by other researchers through the addition of new features. The evolutionary development model best supports this evolution into higher versions, as shown in Fig. 1.
Fig. 1. The evolutionary development model: an outline description leads to specification and development, producing the initial and intermediate versions.
Tools and technologies used
The proposed system consists of two main parts: the backend and the front-end. The backend consists of a database management system and a web server. This part of the system enables system administrators and other users to get access to the collected learning contents. The implementation of the backend uses an open source relational database management system (MySQL) for the main storage of the learning contents and a server-side scripting language (PHP). The front-end part is the mobile application running on the Android mobile operating system, deployed on the user's mobile computing devices (smartphone, PDA). This part of the system is used for accessing the learning contents. An SQLite database is used for the persistent temporary storage of the learning contents that can be accessed during the offline period. The HTTP protocol is used to manage the communication between client and server. The choice of these technologies was based on the fact that systems developed with them are easy to use (with a user-friendly interface) and easy to manage and maintain, and the technologies themselves are widely used and available.
Design requirements
Portable computing/communication devices are essential for mobile based e-learning content delivery and accessibility in HEIs. The most significant feature in the mobile environment is the mobility itself: it allows users to stay connected while outside the reach of conventional communication spaces. In the context of learning environments, mobility can be conceptualized in terms of the ability to access the learning contents anytime, anywhere, without the restriction of time and space. The significant design requirement for the proposed system is the synchronization of the learning content from some remote server to a local database (mobile database) to fulfil the following purposes:
i. extend the learning content delivery and accessibility to mobile computing devices;
ii. synchronize the learning contents to the mobile devices;
iii. reduce the cost of the bandwidth usage;
iv. enable the offline access to learning content;
v. enable the ubiquitous access to learning content anywhere, anytime without the restriction of time and space;
vi. reduce the Internet usage overhead and the server workload;
vii. improve the quality and participation of the learners in learning activities.
System architecture
The proposed system architecture consists of two main parts, the backend and the frontend. The backend consists of the data access layer and the data storage layer, and the frontend consists of the presentation layer, as shown in Fig. 2.
a) Presentation layer: this is the topmost layer of the application and provides the interface between the user and the system. It consists of the following building blocks:
i. Mobile application interface: the mobile application running on the Android mobile operating system, deployed on the user's mobile computing device and used for accessing the learning contents;
ii. Persistent temporary storage: an SQLite database used for storing the synchronized learning contents for offline use;
iii. HTTP connection manager: serves as a factory for new HTTP connections, manages the life cycle of persistent connections, and synchronizes the access to persistent connections, making sure that only one thread can access a connection at a time;
iv. Synchronize and cache: synchronizes the learning content from remote servers and stores it locally on the mobile device for offline use;
v. Web interface: serves user management and the uploading and updating of the learning contents.
b) Data access layer: the purpose of the data access layer is to control the application's functionality by performing detailed processing. This layer coordinates the application processes and commands and makes logical decisions; it moves and processes the data between the presentation layer and the data layer. Thus, the data access layer is the implementation of a web server (here, the Apache server) serving the purpose of content management.
c) Data storage layer: this layer consists of the database servers that form the main storage. It keeps the data neutral and independent from the application servers and the business logic.
When the proposed system, that is, the Mobile Based Learning Content Delivery System (Mobile-LCDS), is installed on a user's access device, it provides the mobile user with a user interface to interact with the system and a persistent storage for the temporary storage of synchronized contents. When a user makes a request, the Application Programming Interface (API) accepts the request via the GET or POST method; it then interacts with PHP classes to get data from the database or store data into the database; and finally it returns the output to the requesting user/device in a JSON/XML format, which is human-readable. The user can synchronize contents from the remote database when the Internet is available; the persistent storage stores the synchronized content locally on the mobile device for offline use.
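The request/synchronize/store cycle described above can be sketched as follows in Python; the endpoint URL, the JSON field names, and the table layout are illustrative assumptions, since the actual Mobile-LCDS API is not specified at this level of detail.

```python
import json
import sqlite3
import urllib.request

SYNC_URL = "http://example.org/mobile-lcds/api/contents.php"  # hypothetical

def synchronize(course_id, db_path="contents.db"):
    """Fetch learning contents as JSON over HTTP and cache them in a local
    SQLite database so they remain accessible offline."""
    with urllib.request.urlopen(f"{SYNC_URL}?course={course_id}") as resp:
        contents = json.load(resp)  # e.g. [{"id": 1, "title": "...", "body": "..."}]
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS contents"
                " (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
    con.executemany("INSERT OR REPLACE INTO contents VALUES (:id, :title, :body)",
                    contents)
    con.commit()
    con.close()

def read_offline(db_path="contents.db"):
    """Offline access path: read the cached contents, no connection needed."""
    con = sqlite3.connect(db_path)
    rows = con.execute("SELECT id, title FROM contents").fetchall()
    con.close()
    return rows
```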
Synchronization and caching
This part discusses an alternative approach (synchronize and cache) for learning content delivery and accessibility without heavy dependence on the Internet. The advancement in mobile applications provides the opportunity to work offline. Synchronization for data-driven applications means that a subset of the application data can be stored locally on the access device, while a data synchronization mechanism keeps the local database and the server data (main storage) in a synchronized state. Cache means the temporary storage of synchronized data. The goal is to find an efficient and cost-effective approach for learning content delivery in HEIs with resource and network constrained environments.
Without the synchronization and caching policy, the user accesses content from the main storage and needs a continuous Internet connection, which is costly in terms of bandwidth connection and usage. It also takes a long time to access content from the main storage because access highly depends on the Internet; as a result, it is affected by the following factors:
i. Web server performance: different web server types used for hosting content could have different processing power, depending on their technical specifications. For example, web servers configured with high-powered central processing units and huge amounts of memory have superior processing capability compared to minimally configured ones. Servers can also be overloaded when the number of concurrent accesses exceeds the web server's capacity;
ii. Location of web servers: typically, web servers are hosted on the backbone of networks to facilitate the delivery of content. Where the web server is hosted on a remote site (for security/management purposes), a leased line connection is required from a network service provider; in this case, the provided bandwidth of the leased line becomes the bottleneck if users demand more bandwidth than the available capacity;
iii. Firewalls and security: where security features are installed in a network, the inspection and filtering of packets with every additional layer of firewall may decrease the effective throughput of the system;
iv. Internet speed: the Internet speed can vary depending on the differing download and access speeds of various end-users, hence causing some delay in content delivery.
The majority of HEIs in third world countries like Tanzania face the challenge of resource and network bandwidth constraints; as a result, the delivery and accessibility of learning contents using Internet-based learning systems becomes a challenge. Content synchronization and caching is an alternative approach for content delivery and accessibility without heavy dependence on the Internet connection. For contents that have been synchronized and stored locally in an access device for future use, every time a user needs such contents, they can be accessed offline from the temporary storage. The storage capacity of most mobile computing devices is large enough to store huge amounts of data. It has been reported by Randell (2013) that Apple's reasonably priced iPhone, the 5c, is the most generous of the 16GB phones recently tested, giving the user 12.6GB (79% of the total space) of unused space. Also, Google's new Nexus 5, which runs on the Android operating system like the S4, offers a relatively large 12.28GB (77%) of usable space; the iPhone 5s provides 12.2GB (76%) of usable storage; the Samsung Galaxy S4 has just 8.56GB; and other devices have reasonable memory space that allows the storage of large amounts of data. This is a promising indication of the possibility of storing huge amounts of data locally in mobile computing devices. This study employs the opportunity brought by the Android built-in SQLite database for the temporary storage of synchronized contents. However, the memory size of the phone will limit how much data can be retrieved by a query.
With the synchronization and caching policy, the proposed system Mobile-LCDS supports the offline accessibility of learning content. When the Internet is available, the system synchronizes a subset of the contents stored in the permanent storage on the remote server to a local database, where they can be used offline, as illustrated in Fig. 3. Therefore, the proposed system reduces the access time, since there is no delay thanks to the independence from the Internet connection; cuts down the cost of bandwidth connection and usage; alleviates the server workload and the Internet usage overhead; and improves the quality of experience and participation of the learners in learning activities.
Use case modelling
A use case model shows a view of the system from the user perspective, thus describing what a system does without describing how the system does it. A use case provides developers with a view of what the users want (Kendall & Kendall, 2011). It describes a function provided by the system that yields a visible result to the actors. An actor describes an entity that interacts with the system; the actor can be either a human or an interacting computer system. The use case diagram model was preferred in this study because it demonstrates the user's view of the system. The model gathers the design requirements of the system and shows the interaction between the actors (users of the system) and the system itself. The system involves three actors, Student, Instructor and System administrator, with the role of each actor shown in Fig. 4.
Mobile-LCDS data flow sequence
A data flow diagram (DFD) is a structured analysis technique used to put together a graphical representation of the data processes that Mobile-LCDS comprises. The DFD level-0 represents the system's major processes within the context process, the data flows and the data stores at a high level of detail, as explained in Fig. 5. It is the decomposition of the context diagram. The major processes in the DFD level-0 represent the major functions of Mobile-LCDS. The details of the Mobile-LCDS DFD level-0 descriptions are shown in Table 1.
Interface and different components of the system
Fig. 6 presents the interface and system components of the mobile client application for users with preferences and intentions of synchronizing the required contents for working offline in the future. The application offers sync options to the user based on user needs and preferences. The system allows a user to synchronize learning content on a semester basis, on a yearly basis, or to synchronize all the contents, depending on the device's storage capacity. The sync options are significant for allowing the user to synchronize only the required contents; as a result, this avoids filling the local database (cache) with unnecessary contents. Synchronizing the learning contents locally on mobile devices has various advantages, including: reducing the cost of the bandwidth connection and usage; the ability to access the learning contents anywhere, anytime without the restriction of time and space; reduction of the Internet usage overhead and the server workload; and speedy and smooth access to the learning content due to the fact that all the required contents will be available on the mobile device, so the user does not need to re-connect to the Internet.
A student forum is an important feature to improve the collaboration between students and the students' participation in learning activities. A student forum is implemented in order to improve the quality of experience and participation of the learners in learning activities and to encourage collaboration among students and instructors, using messages and a discussion forum regardless of their physical location. It helps the students interact and share information such as assignment resources, problems faced in the field, tutorials and innovation ideas. The student forum allows the user to: i. view other students' posts to the forum; ii. add his/her own post to the forum; iii. refresh to get recently added posts to the forum.
The synchronized contents can be cleared out to free memory space in order to accommodate more data. The proposed system also includes web-interface components to be used by system administrators for managing the users of the system and by instructors for uploading and updating the learning contents.
The system admin can perform the following actions after logging into the system: i. create a new instructor profile; ii. view, edit and delete instructors; iii. edit course details; iv. change password; v. log out.
The instructor can perform the following actions after logging into the system: i. insert course details; ii. view students' feedback; iii. edit course details; iv. change password; v. log out.
Results and discussions
This section presents the results from the user acceptance functional requirements testing. Testing and validation of the proposed system were conducted in order to evaluate the system against the specified system design requirements, and the results are summarized in Fig. 7. System testing is the process of evaluating a system or system component in order to verify that it satisfies requirements or to identify differences between expected and actual results (Glenford, Badgett, Todd, & Corey, 2004). System validation is the confirmation, by examination and the provision of objective evidence, that the particular requirements for a specific intended use are fulfilled (Torp, 2004).
The results indicate that the developed system satisfies the specified system design requirements, as revealed in the results presented in Fig. 7(a) and 7(b). The majority of the user responses ranged from strongly agree to agree, as evidenced by the results obtained. This study explored the opportunities brought by mobile technologies in order to improve learning content delivery and accessibility, especially in resource and bandwidth constrained environments. We propose a mobile based system for cost-effective learning content delivery that bridges the gaps in existing e-learning systems by allowing significant bandwidth savings through the offline use of learning contents. The developed system can synchronize learning contents locally on mobile devices when the Internet connection is available. The synchronized contents can then be used offline; this reduces the cost of the bandwidth connection and usage, reduces the Internet usage overhead and the server workload, and improves the e-learning system performance, the Internet access, as well as the quality of experience and participation of the learners in learning activities.
Compared to existing Internet based learning systems like Moodle, the presented system does not depend heavily on Internet connectivity. As a result, it can benefit HEIs within resource and network constrained environments, which is the main problem that the majority of HEIs in third world countries, particularly Tanzania, are facing. In Mobile-LCDS, the learning experience is considered in terms of offline accessibility; mobility and portability; increased motivation in learning activities; enlarged collaboration through the integration of social networking tools; social interaction (for example, a discussion forum); as well as cost-effectiveness and the ability to learn anytime, anywhere. It is expected that the learning outcome can improve individual skills, social skills and the acquisition of new skills, and that the quality and participation of learners in learning activities will be enhanced. The proposed system is underpinned by the traditional learning environment and is also supported by effective policies, rules and regulations, human resources, the Internet and technologies to facilitate the access to learning materials. On the other hand, the successful implementation of blended mobile learning requires a strategic approach which should be owned by the university management, the academic staff, the students and other stakeholders.
Conclusion and future works
In this paper, we present a Cost-effective Mobile Based Learning Content Delivery solution for resource and network constrained learning environments. While owning and maintaining an ICT infrastructure for HEIs has many challenges, including the cost of the hardware, software and human-ware, the growth of mobile technology has brought a new opportunity toward its use for education purposes. The presented system can be applied to cost-effectively broaden and support education in many cities around the world, which are approaching the 'Smart City' concept in their own way, even with less available technology infrastructure. Therefore, the presented system has the potential to enhance the performance of existing e-learning systems, to reduce costs and resource consumption, and to engage the learners more effectively and actively in learning activities. It will also cut down the server workload and the Internet usage overhead by synchronizing the learning contents from some remote server to a local database in the user's device for offline use. Eventually, the proposed system is significant for the enhancement of the wellbeing, the quality of experience and participation of the learners, as well as for facilitating mobility and portability in learning activities.
Future work will address the implementation and testing of the system in real working environments. The system will be tested using a black box testing technique to detect errors. On the mobile side, a usability evaluation will be conducted using different mobile computing devices. The System Usability Scale questionnaire, a recognized usability instrument, will be applied to measure the usability and the user satisfaction of the system. Additionally, future studies should consider measuring the extent of performance improvements, conducting rigorous impact assessments and implementing further system enhancements in real scenarios.
Fig. 2. Mobile-LCDS system architecture.
Fig. 6. Interface and system components.
| 5,731.6 | 2014-12-01T00:00:00.000 | [ "Computer Science" ] |
Effect of fuzziness in fuzzy rule-based classifiers defined by strong fuzzy partitions and winner-takes-all inference
We study the impact of fuzziness on the behavior of Fuzzy Rule-Based Classifiers (FRBCs) defined by trapezoidal fuzzy sets forming Strong Fuzzy Partitions. In particular, if an FRBC selects the class related to the rule with the highest activation (the so-called Winner-Takes-All approach), then fuzziness, as quantified by the slope of the membership functions, has no impact on the classification of data in regions of the input space where rules dominate. On the other hand, fuzziness affects the behavior of the FRBC in regions where the confidence in classification is low. As a consequence, in the context of Explainable Artificial Intelligence, fuzziness is profitable in FRBCs only if classification is accompanied by an explanation of the confidence of the provided outputs.
Introduction
Explainable artificial intelligence (XAI) is a blooming research field propelled by the increasing demand for intelligent systems which should provide accurate answers to complex problems as well as some kind of human-oriented added value (explanations of choices, rationale and confidence in decisions, possible alternative strategies, and so on) (Hagras 2018). The field of application of XAI spans several areas, including Industry 4.0 scenarios (Lu 2019; Xu et al. 2018). From the methodological viewpoint, there are many ways to embody explainability in intelligent systems, from opening black-box models (Guidotti et al. 2018) to the development of specific methods (Biran and Cotton 2017). Fuzzy Logic systems have a great potential in the development of XAI solutions. In fact, they are able to express knowledge in a human-oriented fashion thanks to the adoption of a paradigm enabling the use of natural language terms (Computing with Words) (Zadeh 1999). Such a capability allows to provide the users with readable explanations of the embodied knowledge (represented in a perception-based fashion), and may also guarantee illustrative details concerning the inference process behind certain results (Zadeh 2008).
Nevertheless, attention must be paid to the semantics of the formal objects involved in knowledge representation and reasoning. This is to avoid explanations that are only an illusory appearance conveying no meaningful information. Therefore, when designing an XAI system, the quality of the underlying model should not be evaluated in terms of predictive accuracy only, but also by taking into account the capability of generating meaningful information. This is not an easy task; yet it sheds light on new ways of analyzing existing approaches or devising new ones.
In this paper we focus on Fuzzy Rule-Based Classifiers (FRBCs), which are commonly praised for their ability to represent knowledge in an interpretable form (Gorzałczany and Rudziński 2017; Alonso et al. 2008). In essence, an FRBC is based on a knowledge base represented by a collection of rules. These are easy to read and understand, provided that they have been designed by taking into account interpretability constraints (Alonso Moral et al. 2021). Given an input sample, a fuzzy inference mechanism is triggered, so that the FRBC returns a class label as output. Therefore, an FRBC usually behaves like many other classifiers (not necessarily based on fuzzy logic) but enables a clear interpretation of the knowledge base through the adoption of linguistic terms that reflect the imprecision of perception-based concepts.
In XAI, an obvious step forward consists in endowing FRBCs with the ability to explain the inferred class for a given object. In the literature, powerful methods have been designed to give highly comprehensible explanations by using Natural Language Generation (NLG) techniques (Alonso et al. 2017). Usually, such explanation systems provide a symbolic description representing the reasoning process behind the automatic classification; however, the next question is: how much does the fuzziness of the involved linguistic terms affect such an explanation? That is the core of this study.
In Sect. 2 we give an account of Strong Fuzzy Partitions (SFPs), which are widely used in FRBC design. Although FRBCs can be designed in different ways, we restrict our attention to SFPs with trapezoidal fuzzy sets because they enable the design of interpretable classification rules, thus explaining their widespread employment. Also, trapezoidal fuzzy sets can be easily designed so as to satisfy SFP constraints, while being flexible enough to adapt to data. The same Section introduces some properties of trapezoidal fuzzy sets that are instrumental for the arguments reported in Sect. 2.1, where FRBCs are formalized and the impact of fuzziness on the classification function is analyzed. The outcomes of an experimental session are reported in Sect. 3 to give a visual and quantitative account of the theoretical results on some synthetic data. Finally, a concluding section discusses the theoretical results from a methodological point of view.
Strong fuzzy partitions with trapezoidal fuzzy sets
Let $X = [l, u] \subset \mathbb{R}$ be a Universe of Discourse and let $A_1, A_2, \ldots, A_{n+1}$ be a sequence of normal and convex fuzzy sets defined on $X$. Such a sequence of fuzzy sets constitutes a Strong Fuzzy Partition (SFP) (Dubois et al. 1995; Loquin and Strauss 2006; Perfilieva 2006) provided that

$$\sum_{i=1}^{n+1} A_i(x) = 1 \quad \forall x \in X. \qquad (1)$$

Eq. (1) is often referred to as the Ruspini condition after (Ruspini 1969). The employment of SFPs is quite common in fuzzy modeling, especially when interpretability is a modeling requirement (Alonso Moral et al. 2021). The membership function of a trapezoidal fuzzy set is a piece-wise linear function constrained by four parameters $a, b, c, d \in X$. Provided that $a \leq b \leq c \leq d$, a trapezoidal fuzzy set $T[a, b, c, d]$ is defined for each $x \in X$ as follows:

$$T[a,b,c,d](x) = \begin{cases} \frac{x-a}{b-a} & a < x < b \\ 1 & b \leq x \leq c \\ \frac{d-x}{d-c} & c < x < d \\ 0 & \text{otherwise.} \end{cases} \qquad (2)$$

It should be observed that a trapezoidal fuzzy set collapses to a triangular fuzzy set whenever $b = c$.
Trapezoidal fuzzy sets are convenient in FRBC design because they can be easily constrained in order to generate an SFP. To produce an SFP composed of trapezoidal fuzzy sets with membership functions $A_i = T[a_i, b_i, c_i, d_i]$ ($i = 1, \ldots, n+1$), the following conditions must hold: $a_{i+1} = c_i$ and $b_{i+1} = d_i$ for $i = 1, \ldots, n$ (with $a_1 = b_1 = l$ and $c_{n+1} = d_{n+1} = u$). The intersection point between two contiguous fuzzy sets $A_i, A_{i+1}$ is called cut-point and is denoted by $t_i$ ($i = 1, \ldots, n$). It is easy to verify that

$$t_i = \frac{c_i + d_i}{2} = \frac{a_{i+1} + b_{i+1}}{2}. \qquad (3)$$

In this way, a sequence $t_1, t_2, \ldots, t_n$ of cut-points is defined such that $t_{i-1} \leq t_i$, which can be extended by including $t_0 = l$ and $t_{n+1} = u$. As a consequence, the 0.5-cut of a trapezoidal fuzzy set coincides with an interval identified by consecutive cut-points:

$$[A_i]_{0.5} = [t_{i-1}, t_i]. \qquad (4)$$

A key feature of a trapezoidal fuzzy set is the slope of the left and right boundaries, which are informally identified as the two areas of the domain where the membership degrees are neither 0 nor 1. Formally, the left and right boundaries are the open intervals $]a, b[$ and $]c, d[$ respectively. (One of the two boundaries can be empty for the leftmost and rightmost fuzzy sets; both boundaries are empty in the case of a singleton fuzzy set.) In the case of non-empty boundaries, the corresponding slopes of a trapezoidal fuzzy set $T[a, b, c, d]$ are $s_l = \frac{1}{b-a}$ for the left boundary and $s_r = \frac{1}{c-d}$ for the right boundary. An interesting property for the sake of our study is that the 0.5-cut of a trapezoidal fuzzy set is unaffected by the left and right slopes if the cut-points are fixed. In fact, according to the definition of a trapezoidal fuzzy set (2), replacing $c_i$ and $d_i$ with $c_i - k$ and $d_i + k$ for any $k$ that preserves (2), and changing $a_{i+1}$ and $b_{i+1}$ accordingly, the position of the cut-point $t_i$ does not change (it remains the midpoint $(c_i + d_i)/2$); therefore, the 0.5-cut of $A_i$ as in (4) is unaffected.
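These definitions are easy to verify numerically. The following Python sketch (with arbitrary illustrative parameters) implements (2) and checks that two contiguous trapezoids intersecting at a cut-point satisfy the Ruspini condition, and that changing the slopes symmetrically around the cut-point leaves the 0.5-cut untouched.

```python
def trapmf(x, a, b, c, d):
    """Membership degree of x in T[a, b, c, d], assuming a <= b <= c <= d
    (triangular when b == c), as in Eq. (2)."""
    if b <= x <= c:
        return 1.0                 # core
    if x <= a or x >= d:
        return 0.0                 # outside the support
    if x < b:
        return (x - a) / (b - a)   # rising left boundary
    return (d - x) / (d - c)       # falling right boundary

# Two contiguous sets of an SFP sharing the boundary [c1, d1] = [a2, b2]:
A1 = lambda x: trapmf(x, 0.0, 0.0, 2.0, 4.0)
A2 = lambda x: trapmf(x, 2.0, 4.0, 6.0, 6.0)
t1 = (2.0 + 4.0) / 2                       # cut-point, Eq. (3): t1 = 3
assert A1(t1) == 0.5 == A2(t1)             # both memberships cross at 0.5
assert A1(3.5) + A2(3.5) == 1.0            # Ruspini condition, Eq. (1)

# Steeper slopes around the same cut-point (c1, d1 moved symmetrically):
B1 = lambda x: trapmf(x, 0.0, 0.0, 2.5, 3.5)
B2 = lambda x: trapmf(x, 2.5, 3.5, 6.0, 6.0)
assert B1(t1) == 0.5 == B2(t1)             # the 0.5-cut is unchanged
```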
In the following, we are going to dwell on this concept to analyze the impact of the boundaries on the inference mechanism of an FRBC. As a note of caution, in this work we assume that the cut-points are kept fixed because we are not interested in the ability of an FRBC to adapt to data, but rather in how the fuzziness of the trapezoidal fuzzy sets affects the classification function.
Classification via fuzzy rules
Let $X_1, X_2, \ldots, X_m$ be a collection of Universes of Discourse, each defined as an interval $X_j = [l_j, u_j] \subset \mathbb{R}$. For each $X_j$, an SFP $A_{1,j}, \ldots, A_{n_j+1,j}$ of trapezoidal fuzzy sets on $X_j$ is considered. Also, let $C$ be a finite set of class labels.
A rule $R$ is identified by a pair $R = (A, c)$, where the antecedent $A$ is a fuzzy set defined over the Cartesian product $X$ of the aforementioned Universes of Discourse and $c \in C$ is the consequent of the rule. An FRBC is defined by a collection $S = \{R_k = (A_k, c_k) : k = 1, 2, \ldots, r\}$, with the constraint that no two rules can share the same antecedent, i.e. $A_k \neq A_{k'}$ for any $k \neq k'$. It should be noted that the collection $S$ is determined on the basis of the aforementioned SFPs and it does not necessarily coincide with a grid partition (in this sense, the grid partition represents just a special case of the described arrangement). In general, data-driven methods generate a small set of rules based on the available training data to avoid combinatorial rule explosion; therefore, we can expect $S$ to contain fewer rules than those arising from the full combination of possibilities of a grid partition. As a consequence of our assumptions, the Ruspini condition is imposed while partitioning each single dimension, but it does not represent a constraint to be verified on the rule antecedents. The FRBC $S$ is supposed to be applicable to a domain $D$ included in the union of the supports of all the rule antecedents. In this way, we avoid the undesirable case of inputs for which no rules can be applied. The design process of an FRBC should ensure that no data fall outside the support of all rules. Given an input $x \in D$, the inference function of the FRBC $S$ is carried out as

$$f_S(x) = c_{k^*}, \quad k^* = \arg\max_{k = 1, \ldots, r} A_k(x),$$

i.e., the class returned by the FRBC is the one related to the rule with the highest membership degree for the given input (ties are solved arbitrarily). This inference rule is also called "Winner-Takes-All" (Angelov and Xiaowei 2008).
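A minimal sketch of the Winner-Takes-All inference follows; each antecedent is represented as a tuple of per-dimension membership functions, combined with the min t-norm (the conjunction operator adopted in this paper), and ties are broken arbitrarily by `max()`.

```python
def rule_activation(x, antecedent):
    """Degree of x = (x_1, ..., x_m) in A = A_{i_1,1} x ... x A_{i_m,m},
    using the min operator as conjunction."""
    return min(mf(xj) for mf, xj in zip(antecedent, x))

def wta_classify(x, rules):
    """Winner-Takes-All: return the consequent of the most activated rule.
    `rules` is a list of (antecedent, class_label) pairs."""
    antecedent, label = max(rules, key=lambda rule: rule_activation(x, rule[0]))
    return label
```

For instance, with the trapezoidal memberships of the previous sketch, a one-dimensional rule could be written as `((A1,), "class 1")`.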
Given a rule $R = (A, c)$, we define the region of dominance of $R$ as

$$\mathrm{dom}(R) = \{x \in D : A(x) > 0.5\}.$$

It is important to notice that the region of dominance of a rule is completely characterized by the cut-points of the underlying SFPs; in fact,

$$\mathrm{dom}(R) = \; ]t_{i_1-1,1}, t_{i_1,1}[ \; \times \cdots \times \; ]t_{i_m-1,m}, t_{i_m,m}[, \qquad (6)$$

being $t_{i_j,j}$ the $i_j$-th cut-point of the SFP defined on $X_j$ for all $i_j > 0$. The validity of (6) can be easily checked by observing that $A(x) > 0.5$ if and only if, for each $j = 1, 2, \ldots, m$, $A_{i_j,j}(x_j) > 0.5$. This can be achieved when $x_j$ belongs to the 0.5-cut of $A_{i_j,j}$ with the exclusion of the boundary points.
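Equation (6) translates into a simple interval test. In the sketch below, `cut_points[j]` is assumed to hold the extended cut-point sequence $t_{0,j}, \ldots, t_{n_j+1,j}$ of the $j$-th dimension and `idx[j]` the (1-based) index $i_j$ of the fuzzy set selected by the rule in that dimension.

```python
def in_dominance_region(x, cut_points, idx):
    """True iff x lies in dom(R): per dimension j, x_j must fall strictly
    inside ]t_{i_j - 1, j}, t_{i_j, j}[ , cf. Eq. (6)."""
    return all(t[i - 1] < xj < t[i]
               for xj, t, i in zip(x, cut_points, idx))
```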
Thanks to the concept of region of dominance, it is possible to establish a useful result concerning the classification function of an FRBC.

Lemma 1 Given an FRBC $S$, a rule $R_k = (A_k, c_k) \in S$ and an input $x$ in the region of dominance of $R_k$, then $f_S(x) = c_k$.

In fact, $x$ belonging to the region of dominance means $A_k(x) > 0.5$; hence, for each chosen dimension $j \in \{1, 2, \ldots, m\}$ the following must hold: $A_{i_j,j,k}(x_j) > 0.5$, where $i_j = 1, 2, \ldots, n_j$ is the index of the fuzzy set in the $j$-th dimension and $k = 1, 2, \ldots, r$ is the index of the rule; thus $A_k = A_{i_1,1,k} \times \cdots \times A_{i_m,m,k}$.
By definition of FRBC, any other rule in $S$ (other than $R_k$) is characterized by an antecedent which is different from $A_k$. Let $R_{k'}$ be a rule in $S$ such that, for some dimension $j$, $A_{i_j,j,k'} \neq A_{i_j,j,k}$. The definition of SFP implies that $A_{i_j,j,k'}(x_j) \leq 1 - A_{i_j,j,k}(x_j) < 0.5$, hence $A_{k'}(x) < A_k(x)$. As a consequence, the membership degree of $x$ to $R_k$ is the highest among all the rules and therefore $f_S(x) = c_k$.
Informally speaking, the previous lemma states that regions of dominance establish subsets of the input domain where only one rule dictates the class label. It is therefore possible to define a subset of the input domain, namely

$$B = \bigcup_{k=1}^{r} \mathrm{dom}(R_k),$$

where the classification function is determined by one rule only for each input. What is more important for the purpose of our study is the following corollary:
Corollary 1
The set B is unaffected by the modifications applied on the slopes of the underlying fuzzy sets, provided that the corresponding cut-points are fixed.
The corollary follows by observing that B is included in the union of the regions of dominance of all the rules of an FRBC and, since each region of dominance is defined by the 0.5-cuts of the underlying fuzzy sets, which are not affected by the slopes of the fuzzy sets, then B is also unaffected by such slopes.
Based on these results, it is possible to affirm that an FRBC $S$ behaves like a crisp classifier in the region $B$.

Lemma 2 Given an FRBC $S$, there exists a crisp classifier $f$, defined only by the set of cut-points, such that $f(x) = f_S(x)$ for every $x \in B$.

Notice that $f$ does not depend on any of the parameters that define the trapezoidal fuzzy sets underlying the FRBC $S$, but only on the set of cut-points. Therefore, within $B$ the classification function of an FRBC is completely unaffected by the slopes of all the trapezoidal fuzzy sets. In other words, it does not benefit from the involved fuzziness.
Outside $B$, however, the fuzziness of the fuzzy sets plays a role in determining the confidence of the decision carried out by the FRBC. For each $x \in U = D \setminus B$, by definition we have $A_k(x) \leq 0.5$ for all $k$. If $A_k(x) = 0.5$, then there may exist another rule $R_{k'}$, $k' \neq k$, such that $A_{k'}(x) = 0.5$. This is verified if rules $R_k$ and $R_{k'}$ share the same fuzzy sets in the antecedent with the exception of one dimension only, say $j$, where the fuzzy sets of the two rules intersect. In such a case, if $c_k \neq c_{k'}$ the classification ambiguity can be solved by an arbitrary choice (e.g., random).
If $A_k(x) < 0.5$ for all $k = 1, 2, \ldots, r$, the classification function can be better analyzed from the viewpoint of Possibility Theory (Dubois and Prade 2015). In fact, the inference schema of an FRBC is compatible with a possibilistic interpretation of the embodied fuzzy rules. Namely, each rule defines the possibility distribution that an object class is $c_k$ provided that the observed features belong to $A_k$. We write $\pi_{c_k}(x) = A_k(x)$ to denote the possibility degree that the true class is $c_k$ given the input $x$ according to the $k$-th rule. Rules with the same consequent class merge into a single possibility distribution defined by the union of all their antecedents. Formally, for all $c \in C$:

$$\pi_c(x) = \max_{k : c_k = c} A_k(x).$$

When an input is given, the possibility degree is computed for all class labels, and the class label with the highest possibility degree is chosen. This operation can be justified by introducing the measure of necessity (or certainty): informally speaking, the certainty about a class label is evaluated in terms of the impossibility of the other class labels. Formally:

$$\nu_c(x) = 1 - \max_{c' \in C, \, c' \neq c} \pi_{c'}(x).$$

Thus, by selecting the class with the highest possibility degree, it is ensured that the certainty degree is also the highest.
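These two formulas can be sketched in a few lines of Python (the min-conjunction `rule_activation` of the earlier sketch is redefined here for self-containment):

```python
def rule_activation(x, antecedent):
    """min-conjunction of the per-dimension membership degrees."""
    return min(mf(xj) for mf, xj in zip(antecedent, x))

def possibility(x, rules):
    """pi_c(x): for each class c, the highest activation among the rules
    whose consequent is c."""
    pi = {}
    for antecedent, c in rules:
        pi[c] = max(pi.get(c, 0.0), rule_activation(x, antecedent))
    return pi

def certainty(pi):
    """nu_c(x) = 1 - max over the possibilities of the other classes."""
    return {c: 1.0 - max((p for c2, p in pi.items() if c2 != c), default=0.0)
            for c in pi}
```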
If $x \in B$ it is easy to verify that the certainty degree of the selected class is higher than 0.5 (it is equal to 1 if $x$ belongs to the core of the antecedent of a rule; this is a consequence of using SFPs). However, if $x \in U$ the analysis deserves some notes of caution. By construction, for any $c \in C$ the possibility degree is $\pi_c \leq 0.5$, therefore $\nu_c \geq 0.5$. In fact, $\nu_c$ is evaluated as the 1-complement of a quantity that is not greater than $\pi_c$ (and therefore not greater than 0.5). In other words, the certainty degree of any class label is higher than its possibility: this is an anomalous result since, in normal situations, certainty is never greater than possibility. Furthermore, for a given class $c$ it is possible to compute the certainty degree that another class is the true one. This can be simply reckoned by computing the impossibility that $c$ is the true class, i.e. $\nu_{C \setminus \{c\}} = 1 - \pi_c$. In the case that $x \in U$, whatever class label $c$ is selected, we obtain $\nu_{C \setminus \{c\}} \geq 0.5$; that is, for any possible class label emitted by the classifier, it is at least as certain that another class is the true one. Again, this is a situation that should be avoided in classification. It must be observed that this case does not happen if $x \in B$.
It is important to notice that the slopes of the trapezoidal fuzzy sets affect the volume of the set U. Ideally, this volume should be as small as possible, which can be achieved by crisp rules. On the other hand, by using triangular fuzzy sets as a special case of trapezoidal fuzzy sets, the volume of U is maximized.
Numerical results
We tested the impact of the theoretical results shown in the previous section on some synthetic datasets. A granulation method was applied to generate SFPs for each dimension related to the data at hand. Namely, for each dataset we used DC* to generate the cut-points and the initial SFP for each dimension. DC* is a specific algorithm designed to perform a double clustering process devoted to extracting interpretable fuzzy granules of information from data and expressing them in the form of fuzzy classification rules. A first clustering of the data is performed using a prototype-guided algorithm; then the derived prototypes are projected on each dimension and those projections are further clustered by exploiting the capabilities of the A* search algorithm.
We applied DC* to the bi-dimensional synthetic datasets depicted in Figs. 1 and 2, which illustrate the cut-point configurations produced by DC* together with the data points. Table 1 sums up the main characteristics of the datasets. As can be observed, the datasets differ in the number of classes and data points. Also, the application of DC* produced cut-points that in some cases are in agreement with the data distribution, while in other cases they appear to be less appropriate for discriminating among classes.
Once cut-points have been generated from data, SFPs have been designed in terms of trapezoidal fuzzy sets that are constrained to intersect in correspondence of the cut-points. This has been accomplished in different ways: three heuristic methods called "Constant Slope" (CS), "Variable Fuzziness" (VF) and "Core Points" (CP) (Mencar et al. 2013), and two data-driven techniques based on Particle Swarm Optimization (PSO). The two data-driven techniques, called "Leftmost Slope Constraint" (LSC) and "Constant Slope Constraint" (CSC), aim at optimizing the slopes of the trapezoidal fuzzy sets in order to achieve the highest classification accuracy on the dataset.
In Table 2 we show the classification accuracy achieved for each dataset and for each method used for generating the SFPs. We observe a high stability of classification accuracy for any given dataset. The most relevant changes can be observed for datasets SD6 and SD7: the corresponding plots in Fig. 2 show the presence of granules (i.e. boxes bounded by cut-points, which correspond to rules if there are enough data) where data pertaining to different classes are mixed. Such cases are related to some DC* results which turned out to be less appropriate in terms of class discrimination; however, in those regions the classification function produces varied outputs while the slopes of the trapezoidal fuzzy sets are modified.
In Fig. 3 we provide a comparison of the SFPs obtained by applying the different methods put in action during the experimental session. The SD2 dataset has been chosen as an example and only one dimension has been considered for the sake of illustration. It can be observed how the classification results are highly stable through the application of the different methods, in spite of the differences achieved while designing the trapezoidal fuzzy sets involved in the SFPs.
Fig. 1. The datasets SD1-SD4 adopted for the numerical simulation. The shadowed areas correspond to regions outside the support of all rule antecedents. Data points falling in these areas are classified randomly by DC*. Regions delimited by cut-points and without shadowed areas correspond to the regions of dominance of some rules.
Conclusions
We considered the classification carried out by an FRBC where the fuzzy sets in the antecedent are aggregated through the min operator and inference is determined by the Winner-Takes-All rule. The theoretical results, supported by the numerical experiments, show that the fuzziness of the linguistic terms involved in an FRBC, as quantified by the slope of the corresponding trapezoidal fuzzy sets, does not affect the classification function in the region where the classifier is more confident (that is, where the degree of certainty of the returned class is greater than 0.5). On the other hand, fuzziness affects the behaviour of an FRBC in a region of the input space where classification is problematic from the possibilistic point of view. However, if an FRBC learning algorithm is capable of capturing the hidden relations among data, then most of the data will fall in the regions of dominance of some rules, thus reducing the effects of classification outside such safe regions. All in all, the performance of an FRBC is predominantly determined by the position of the fuzzy sets in their domain, which is well captured by the collection of cut-points: when the cut-point positions are modified, the decision boundary of the classifier changes accordingly, thus affecting performance. On the other hand, by changing the fuzziness of the membership functions, the impact on the classifier is marginal, provided that SFPs are adopted and a class is selected by choosing the class label of the rule showing the highest membership degree. As an extreme case, which may correspond to the adoption of a grid partition strategy to split the input space, a fuzzy rule-based classifier may act exactly as a crisp classifier, thus implying that fuzziness does not play any role at all in the classification inference.
What is therefore the role of fuzzy sets in an FRBC? In a Machine Learning perspective, fuzzy sets are useful to fine-tune the decision boundaries in the presence of samples far from the clusters characterizing the regions of dominance of some rule. However, such a result appears to be marginal, since the performance of an FRBC can be improved by injecting more flexibility. For example, SFPs may be put aside, but some care must be taken to preserve interpretability. Moreover, fuzziness can play a relevant role by using different inference schemes, e.g. by allowing the inference of sets of classes, possibly associated with some confidence information.
In the context of XAI, however, the quality of the decision returned by an intelligent system is of utmost importance. In this sense, the fuzziness embodied in the FRBC gives valuable information about the confidence of classification. In particular, the classification function of an FRBC can be enriched by adding to the predicted class label a measure of confidence, i.e. the possibility and certainty degrees expressing the truthfulness of the inferred prediction according to the embodied knowledge base. (Eventually, this additional information can be rendered in legible form through some NLG process.) Finally, the membership degrees, rather than being arbitrarily determined, can be semantically grounded on some data properties (e.g., membership can be defined in terms of similarity with respect to a prototypical sample or interval). In such cases, it is possible to provide a faithful explanation of the reasons behind the decision carried out by an FRBC.
In conclusion, fuzziness may have a reduced role in the inference mechanism of an FRBC, while being relevant in terms of explanation of the produced results. Hence, we believe that these results convey an important message to the designers of fuzzy rule-based classifiers, since they represent a hint concerning the real utility of fuzziness in fuzzy modeling.
Funding Open access funding provided by Università degli Studi di Bari Aldo Moro within the CRUI-CARE Agreement. This work has been partially supported by Ministero dell'Istruzione, dell'Università e della Ricerca (MIUR) under Grant PON ARS01_00141 "CLOSE".
Data availability
The datasets and the code generated and/or analysed during the current study are available from the corresponding author on reasonable request.
Conflict of interest
The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
| 5,951.4 | 2022-05-06T00:00:00.000 | [ "Computer Science" ] |
XGDAG: explainable gene–disease associations via graph neural networks
Abstract
Motivation: Disease gene prioritization consists in identifying genes that are likely to be involved in the mechanisms of a given disease, providing a ranking of such genes. Recently, the research community has used computational methods to uncover unknown gene–disease associations; these methods range from combinatorial to machine learning-based approaches. In particular, during the last years, approaches based on deep learning have provided superior results compared to more traditional ones. Yet, the problem with these is their inherent black-box structure, which prevents interpretability.
Results: We propose a new methodology for disease gene discovery, which leverages graph-structured data using graph neural networks (GNNs) along with an explainability phase for determining the ranking of candidate genes and understanding the model's output. Our approach is based on a positive–unlabeled learning strategy, which outperforms existing gene discovery methods by exploiting GNNs in a non-black-box fashion. Our methodology is effective even in scenarios where a large number of associated genes need to be retrieved, in which gene prioritization methods often tend to lose their reliability.
Availability and implementation: The source code of XGDAG is available on GitHub at: https://github.com/GiDeCarlo/XGDAG. The data underlying this article are available at: https://www.disgenet.org/, https://thebiogrid.org/, https://doi.org/10.1371/journal.pcbi.1004120.s003, and https://doi.org/10.1371/journal.pcbi.1004120.s004.
Introduction
Gene-disease association (GDA) discovery is one of the main tasks in network medicine. The goal of computational methods in this field is to prioritize what genes are more likely to be associated with diseases. This is usually performed by leveraging network data, such as protein-protein interaction (PPI) networks and gene-disease networks. Among the most used PPIs, we find, for instance, BioGRID (Oughtred et al. 2019), HuRI (Luck et al. 2020), and STRING (Szklarczyk et al. 2021). In these networks, nodes are proteins (or genes) that are connected with each other if an interaction exists. For gene discovery purposes, these networks are extended with information on disease associations, for which databases such as DisGeNET (Piñero et al. 2017, 2020) and eDGAR (Babbi et al. 2017) are typically used.
Many gene detection techniques have been developed over the years. Among the most known approaches are DIAMOnD (Ghiassian et al. 2015) and DiaBLE (Petti et al. 2020), which rely on the concept of connectivity significance for finding new candidate disease genes. Other techniques, such as ProDiGe (Mordelet and Vert 2011) and DOMINO (Quinodoz et al. 2017), use machine learning to determine associated genes. Another approach, Markov clustering (MCL) (Enright et al. 2002, Sun et al. 2011), creates clusters by applying stochastic flow simulation in graphs, and genes in the same clusters as associated genes are considered candidates. Another line of work uses random walks with restart (RWR) (Köhler et al. 2008, Valdeolivas et al. 2019) for the task of gene discovery. GUILD (Guney and Oliva 2012) leverages the paths interconnecting nodes corresponding to disease genes to derive topology-based rankings. ToppGene (Chen et al. 2009) makes use of a fuzzy similarity measure to compute the similarity between pairs of genes based on semantic annotations. Furthermore, gene discovery can be framed as a positive-unlabeled (PU) learning problem (Bekker and Davis 2020).
Differently from classic machine learning scenarios, in which a binary dataset consists of positive and negative samples, in PU learning we have, instead of negative samples, a set of unlabeled instances, which can be regarded as a mixture of negative elements and positive samples that have not yet been discovered. Different strategies approach gene discovery as a PU learning task by employing two-step techniques, such as PUDI (Yang et al. 2012), EPU (Yang et al. 2014), and, more recently, NIAPU (Stolfi et al. 2023).
Motivated by these previous studies, we frame gene prioritization as a PU learning problem. Given its performance, we rely on the NIAPU pipeline to define the node features and the label propagation system. Then, after the application of NIAPU, we train a GraphSAGE (Hamilton et al. 2017) model over the propagated labels. Finally, the explainability phase defines the explanation subgraph for associated genes, which we use to expand the set of candidate genes for further analysis: we hypothesize that such subgraphs may contain newly associated genes, following the connectivity significance principle (Ghiassian et al. 2015), according to which a seed gene is likely to be connected to other seed genes. First, we explore different explainable artificial intelligence (XAI) methods to determine the top-performing ones, and then we compare those selected with several state-of-the-art methods for disease gene identification. We call our proposed method XGDAG (eXplainable Gene-Disease Associations via Graph neural networks).
To the best of our knowledge, XGDAG is the first method to use an XAI-based solution in the context of PU learning for disease gene prioritization with graph neural networks (GNNs). The main contribution of the work lies in the novel use of the explainability results. Commonly, XAI is used as a passive tool to support and rationalize model decisions. In our case, explainability tools have an active role in the computation of the final ranking, given that the new candidate genes are directly extracted from the explanation subgraphs (see Section 3.3). This approach drastically diverges from previous attempts to use XAI for GNNs for a similar task. Indeed, Pfeifer et al. (2022) proposed the use of XAI to weight patient-specific PPIs before applying clustering for disease module detection. Even in this case, the use of XAI can be regarded as a support tool to enhance the output of other methods rather than an active tool to produce the final results.
Data sources and processing
We selected BioGRID (version 4.4.206) as the PPI network for our experiments. We collected GDAs from DisGeNET (Piñero et al. 2017, 2020) (version 7.0), considering 10 diseases: malignant neoplasm of breast (disease ID C0006142), schizophrenia (C0036341), liver cirrhosis (C0023893), colorectal carcinoma (C0009402), malignant neoplasm of prostate (C0376358), bipolar disorder (C0005586), intellectual disability (C3714756), drug-induced liver disease (C0860207), depressive disorder (C0011581), and chronic alcoholic intoxication (C0001973). Disease selection and data cleaning criteria are the same as in Stolfi et al. (2023). In particular, we considered diseases with a high number of seed genes, to allow for coherent learning of the neural network. We filtered the PPI to keep interactions only between Homo sapiens genes. After isolating the largest connected component of the network, we ended up with a PPI consisting of 19 761 genes and 678 932 undirected links. Regarding GDAs, we removed genes that were not in BioGRID, resulting in 1025 genes for disease C0006142, 832 for C0036341, 747 for C0023893, 672 for C0009402, 606 for C0376358, 451 for C0005586, 431 for C3714756, 320 for C0860207, 279 for C0011581, and 255 for C0001973. To train our deep learning model, we considered GDAs from the curated set of associations, which contains GDAs from reliable sources (UniProt Consortium 2015, Davis et al. 2019, Rehm et al. 2015, Martin et al. 2019, Tamborero et al. 2018, Gutiérrez-Sacristán et al. 2015). Instead, as we describe in Section 4, for the validation of our methodology, we rely on the set of all associations. This is an extension of the dataset composed of GDAs gathered from additional sources not considered in the curated set (Bundschus et al. 2008, 2010, Bravo et al. 2014, 2015), and forms a solid base to evaluate the discovery efficacy of computational methods. An in-depth structural analysis of network properties is available in the Supplementary Material.
Figure 1. The XGDAG framework. A graph based on a PPI network and enriched with GDA information and node features is fed into a GNN. After the network has been trained, the predictions for the positive (P) genes are explained using an XAI methodology. Next, the nodes that appear in both the explanation subgraph and in the LP set are marked as candidate genes for prioritization.
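As a minimal illustration of the preprocessing steps just described (this is not the authors' code; the networkx-based helper and the toy gene list are ours):

```python
import networkx as nx

def preprocess_ppi(edges, human_genes):
    """Keep only interactions between Homo sapiens genes, then restrict
    the PPI to its largest connected component."""
    g = nx.Graph()
    g.add_edges_from((u, v) for u, v in edges
                     if u in human_genes and v in human_genes)
    largest = max(nx.connected_components(g), key=len)
    return g.subgraph(largest).copy()

# Toy usage: one non-human node ("Gag") and one small disconnected component.
edges = [("TP53", "MDM2"), ("MDM2", "BRCA1"), ("Gag", "TP53"), ("KRAS", "EGFR")]
ppi = preprocess_ppi(edges, {"TP53", "MDM2", "BRCA1", "KRAS", "EGFR"})
print(ppi.number_of_nodes(), ppi.number_of_edges())  # 3 2
```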
Label propagation
Our dataset can be seen as a PU dataset, in which a gene can be associated with a disease (positive) or not (unlabeled). Because associations may exist that have not yet been discovered, it is not safe to mark unknown associations as negative.
Moreover, PU datasets are usually highly unbalanced. In fact, only a small fraction of the entire set of genes in the interactome is associated with a given disease. Training on unbalanced datasets can negatively impinge on the performance of machine and deep learning models, which results in the need for specific methods for unbalanced learning. For these reasons, label propagation procedures can be used to assign pseudo-labels to unlabeled instances, with a twofold benefit: avoiding the bias introduced by setting the unlabeled instances as negative and obtaining a more balanced dataset. NIAPU (Stolfi et al. 2023) uses a Markovian diffusion process to assign four pseudo-labels to unlabeled genes according to the likelihood of association: likely positive (LP), weakly negative (WN), likely negative (LN), and reliably negative (RN). To do that, it relies on disease-specific features that allow the proper identification of the different classes (the positive class P and the pseudo-classes). In particular, it assigns to each gene, for each disease, the following features: heat diffusion (Carlin et al. 2017), balanced diffusion, NetShort (White and Smyth 2003), and NetRing (Baronchelli and Loreto 2006). Differently from classic network measures (degree, betweenness centrality, etc.), which only depend on the graph topology and are the same regardless of the disease considered, these features are computed taking into account the seed genes (represented by the class P). For this reason, for each disease, we have a different set of features assigned to the genes, which properly characterizes the disease itself. The NIAPU label assignment pipeline is composed of six core steps. In the first step, a gene similarity matrix is built, relying on the aforementioned features. As a second step, the similarity matrix is simplified by removing edges with weak connections, excluding them from the label propagation process. Third, the starting probabilities for the Markovian diffusion process are initialized and the RN set is defined as the set of genes that are furthest from the genes in P. The fourth step is the Markov diffusion process itself, which distributes label probabilities across the graph. In the fifth step, the stationary distribution of the Markov process is used to assign the rest of the pseudo-labels. The sixth and last step consists in training a machine learning model (a GNN, in our case) on the newly assigned labels. More details on the features used, their effectiveness in gene discovery, and the NIAPU algorithm can be found in the work of Stolfi et al. (2023) and in the Supplementary Material.
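As a schematic stand-in for the pipeline just outlined (deliberately simplified, and not the NIAPU algorithm itself: the restart weight alpha and the threshold values are arbitrary placeholders), a seed-anchored diffusion followed by thresholding could look like this:

```python
import numpy as np

def diffuse(W, seed_idx, alpha=0.5, n_steps=50):
    """Toy diffusion with restart on a gene-similarity graph: mass takes a
    random-walk step with weight (1 - alpha) and is re-injected at the
    seed genes (class P) with weight alpha."""
    n = W.shape[0]
    P = W / W.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
    seeds = np.zeros(n)
    seeds[seed_idx] = 1.0 / len(seed_idx)
    p = seeds.copy()
    for _ in range(n_steps):
        p = alpha * seeds + (1.0 - alpha) * (p @ P)
    return p / p.max()

def pseudo_labels(scores, positives, cuts=(0.5, 0.2, 0.05)):
    """Map diffusion scores to NIAPU-style pseudo-labels; the cut values
    here are illustrative, not those used by NIAPU."""
    labels = []
    for i, s in enumerate(scores):
        if i in positives:
            labels.append("P")
        elif s >= cuts[0]:
            labels.append("LP")
        elif s >= cuts[1]:
            labels.append("WN")
        elif s >= cuts[2]:
            labels.append("LN")
        else:
            labels.append("RN")
    return labels

# Toy 5-gene similarity graph with gene 0 as the only seed.
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
scores = diffuse(W, seed_idx=[0])
print(pseudo_labels(scores, positives={0}))
```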
Graph neural network model and training
After the label propagation, we obtain a dataset in which previously unlabeled items are labeled with the most suitable pseudo-label. We next train a GraphSAGE (Hamilton et al. 2017) GNN model. This is an inductive learning procedure that learns the embedding of a node assuming that nodes in the same neighborhood have similar features. It does so by learning aggregator functions that generate node embeddings relying upon a node's features and neighbors. A GraphSAGE layer, as defined in the PyTorch Geometric (Fey and Lenssen 2019) implementation we used, generates the embedding $x'_i$ for node $i$, after the application of a nonlinear activation function $\sigma$, as $$x'_i = \sigma\left(W_1 x_i + W_2 \cdot \mathrm{mean}_{j \in \mathcal{N}(i)} x_j\right),$$ where $W_1$ and $W_2$ are the weight matrices learned by the neural network, $x_i$ is the feature vector for node $i$, $\mathcal{N}(i)$ is the one-hop neighborhood of node $i$, and $x_j$ is the feature vector for the neighbor node $j$. The mean function aggregates information from all the neighboring nodes without applying any sampling. In our case, $\sigma$ is a ReLU function (Fukushima 1975). The use of this GNN is also suitable for dynamic graphs, as it is able to generate embeddings of new nodes without the need to retrain the model; only node features and neighbor node information are needed. Because a single layer aggregates information at a distance of one hop and the diameter of our network is seven, we employ a seven-layer GraphSAGE GNN to gather the information flowing through the whole network. Working with deep GNNs may cause oversmoothing (Zhao and Akoglu 2020), which consists in the degradation of the model's performance as the number of layers increases. To guarantee that this does not occur in our case, we tested different architectures with different depths, obtaining the best performance with seven GraphSAGE layers (the results of the comparative study are available in the Supplementary Material). We trained the model using the Adam optimizer (Kingma and Ba 2015) with the learning rate set to $10^{-3}$ and the weight decay to $5 \times 10^{-4}$ for a maximum of 40 000 epochs, employing an early stopping procedure when the loss reaches a plateau. To train the model, we split the dataset into training (70%), validation (15%), and test (15%) sets, maintaining the balance of the classes between the sets. The performance of the GNN on the test set is summarized in Table 1.
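A compact sketch of such a model in PyTorch Geometric follows. The depth, ReLU activation, and optimizer settings mirror the description above; the hidden width of 64 and the five-class output (P plus the four pseudo-classes) are our assumptions:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv

class DeepSAGE(torch.nn.Module):
    """Seven stacked GraphSAGE layers (mean aggregation is SAGEConv's
    default), with a ReLU after every hidden layer."""
    def __init__(self, in_dim, hidden_dim, n_classes, depth=7):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * (depth - 1) + [n_classes]
        self.convs = torch.nn.ModuleList(
            [SAGEConv(dims[i], dims[i + 1]) for i in range(depth)]
        )

    def forward(self, x, edge_index):
        for conv in self.convs[:-1]:
            x = F.relu(conv(x, edge_index))
        return self.convs[-1](x, edge_index)  # class logits per node

# Assumed sizes: the 4 NIAPU features in, 64 hidden (our guess), 5 classes out.
model = DeepSAGE(in_dim=4, hidden_dim=64, n_classes=5)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)
```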
Explainability phase
The next step, after the training of the model, is to explain its predictions. For that, we have tested several XAI techniques on top of XGDAG. These methods output a subgraph of the original graph, the explanation subgraph, which contains the most influential nodes for the prediction. Our method applies one explainability technique to the positive genes P. For each explained node $n$, we thus obtain the explanation subgraph $G_n$. Every node in $G_n$ has an assigned importance score (which depends on the XAI method used). $G_n$ may contain nodes belonging to different pseudo-classes. To enhance the accuracy of the results, we filter $G_n$ by keeping only the genes that the GNN predicted to be LP, which are more likely to be associated genes according to the NIAPU labeling. We thus obtain a reduced explanation subgraph, the candidate subgraph $G_n^{LP}$. We repeat this process for every node in P. If a node $i$ appears in more candidate subgraphs, it is more likely to be associated with the disease, as per the connectivity significance property (Ghiassian et al. 2015). We take this into account as follows: we keep track of the number $M_i$ of subgraphs in which node $i$ appears and of its cumulative importance score $S_i$, obtained by summing all the importance scores $s_{ij}$ that node $i$ has in the prediction of each node $j$ (we assume that $s_{ij} = 0$ if $i$ is not in $G_j$). Every gene $i$ is then assigned a tuple $(M_i, S_i)$. Finally, we obtain a ranking of candidate genes by sorting all the genes in the explanation subgraphs according to $(M_i, S_i)$. A graphical representation of the XGDAG prioritization mechanism is shown in Fig. 2.
Explainability methods for GNNs. In our study, we made use of three XAI methods for GNNs. Each of them relies on a different rationale to obtain explanation subgraphs. The first method is GNNExplainer (Ying et al. 2019), which established itself as the first explanation methodology for GNNs and is still among the most used strategies for explaining GNN predictions. It works by learning a mask on the adjacency matrix by maximizing mutual information. Its output is a subgraph of nodes that are relevant for the prediction (along with a subset of node features). Its explanations are edge-oriented. Another method we used is GraphSVX (Duval and Malliaros 2021). It relies on a linear approximation of the concept of Shapley values (Shapley 1953) from game theory, which here are used as a proxy for node importance contribution. The use of Shapley values puts GraphSVX explanations on a solid and robust theoretical background. It delivers node-centric explanations. Finally, the third strategy is called SubgraphX (Yuan et al. 2021). It is the first method to focus on searching for explanation subgraphs only in terms of connected graphs, evaluating the importance that each of them has on the prediction. It exploits a Monte Carlo tree search to look for promising coalitions of connected nodes and computes a Shapley value approximation for each subgraph. The selected one is the subgraph associated with the highest Shapley value. The three methods explain the predictions leveraging the three different key components of a graph: edges, nodes, and subgraphs, respectively. This allows us to have comprehensive explanations of the GNN predictions.
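A minimal sketch of the $(M_i, S_i)$ aggregation and ranking just described (the data layout and names are ours):

```python
from collections import defaultdict

def rank_candidates(explanations):
    """Aggregate per-explanation importance scores into the (M_i, S_i) ranking.

    `explanations` maps each explained positive gene j to its candidate
    subgraph G_j^LP, given as {candidate_gene_i: importance_score_s_ij};
    genes absent from a subgraph implicitly contribute s_ij = 0.
    """
    counts = defaultdict(int)    # M_i: number of subgraphs containing i
    scores = defaultdict(float)  # S_i: cumulative importance of i
    for subgraph in explanations.values():
        for gene, s_ij in subgraph.items():
            counts[gene] += 1
            scores[gene] += s_ij
    # Sort by (M_i, S_i), both descending, as in the described criterion.
    return sorted(counts, key=lambda g: (counts[g], scores[g]), reverse=True)

# Toy input: two explained seed genes with their candidate subgraphs.
explanations = {
    "seed1": {"A": 0.9, "G": 0.4},
    "seed2": {"C": 0.7, "G": 0.5},
}
print(rank_candidates(explanations))  # ['G', 'A', 'C'] - G appears twice
```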
To use XAI methods as independent tools for prioritization, we employ them in a PU learning setting. Indeed, we use them to explain models trained on binary PU data, devoid of any prior label propagation. As a result, they lack the assistance provided by the classes generated during the label propagation phase, which can be considered a preliminary prioritization. Without the assistance of the LP class, the entire explanation subgraph is considered for prioritization without any node pooling. This introduces noise into the results and reduces the accuracy of the final ranking, as shown in Section 4 when comparing XGDAG-based variants with standalone XAI tools. In more detail, for any node $n$, the $G_n^{LP}$ set is absent in standalone XAI-based prioritization; instead, we use the set $G_n^{U}$, which includes genes that are present in the explanation subgraph and that were predicted as unlabeled (U) by the GNN trained in the binary PU setting. Then, we proceed with the scoring and ranking criteria as proposed in Section 3.3. As mentioned earlier, using the entire set of genes predicted as unlabeled for prioritization introduces noise, as it may result in prioritizing genes that are highly unlikely to be associated with the disease, specifically the genes that would be predicted as RN by the GNN trained on the propagated labels. Conversely, the incorporation of label propagation in XGDAG brings additional value by facilitating the learning through pseudo-classes and assisting in the discovery of candidates through LP genes.
Results
To validate the obtained results, we performed both a numerical evaluation and an enrichment analysis. With the former, we compared, in terms of F1 score, the retrieval effectiveness of XGDAG with other methodologies for gene discovery; we compute the F1 score taking into consideration the number of associated genes in the set of all associations that each method is able to detect. Seed genes present in the curated set are not considered for this purpose: they were used as positive genes for the training. This validation setting allows us to test whether our model is able to retrieve genes that had been discovered by previous research. In the enrichment analysis, we inspected whether the set of genes prioritized by XGDAG was connected with the diseases under examination, namely whether the genes were enriched in pathways, gene ontologies, or other diseases associated with the considered ones.
Figure 2. Graphical representation of the XGDAG prioritization mechanism. The output graph from the GNN is fed into an XAI method. For each P gene, we generate an explanation subgraph. This contains the nodes that were influential for the prediction of the node as P. We pool the subgraph by filtering out non-LP nodes, obtaining a final candidate subgraph. $s_{ij}$ is the importance score assigned by a given explanation method to $i$ for the prediction of node $j$. Assuming the cumulative importance score for node C to be greater than that of node A ($S_C > S_A$), we obtain the gene ranking in the picture, with G as the top-ranked node because it appears in two candidate subgraphs.
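Concretely, the per-cutoff F1 computation described here amounts to the following sketch, where validation_set would contain the all-associations genes excluding the curated seeds (an assumption consistent with the setup above):

```python
def f1_at_k(ranking, validation_set, k):
    """F1 of the top-k retrieved genes against the validation associations."""
    retrieved = set(ranking[:k])
    tp = len(retrieved & validation_set)
    if tp == 0:
        return 0.0
    precision = tp / k
    recall = tp / len(validation_set)
    return 2 * precision * recall / (precision + recall)

# Toy usage: a ranking evaluated at increasing retrieval cutoffs.
ranking = ["G", "A", "C", "B"]
truth = {"G", "B", "Z"}
print([round(f1_at_k(ranking, truth, k), 2) for k in (1, 2, 4)])
# [0.5, 0.4, 0.57]
```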
Numerical evaluation
First, in Fig. 3, we compare the performance of XGDAG against the single XAI methods on which it is based, used as standalone tools (here we show the F1 score; more comparison metrics are available in the Supplementary Material). Notice that the PU learning-based XAI approach achieves higher performance than its plain-explainability counterpart. Indeed, the pre-prioritization obtained with the LP set from the label propagation phase helps in the identification of the pool of possible new candidate genes.
We thus selected the best-performing XGDAG variants in terms of overall F1 score. Given their at-par performance, we chose the GraphSVX- and the GNNExplainer-based approaches. We compared them against state-of-the-art methodologies for gene prioritization, namely NIAPU, DIAMOnD, MCL, RWR, two variants of GUILD (fFlow and NetCombo), and ToppGene. The plots in Fig. 4 show that XGDAG is more effective and robust than the other strategies. As we increase the number of retrieved genes, it is able to keep the number of retrieved associated genes high. On the contrary, methodologies such as DIAMOnD may be more effective in the retrieval when a small number of candidates is searched. However, they lose their reliability when higher numbers of candidate genes are considered, as also pointed out by DIAMOnD's designers (Ghiassian et al. 2015). In this respect, XGDAG proved to be the best solution even when looking for larger sets of candidate genes.
Results on a high-quality curated dataset
By inspecting the results, we noticed the very high accuracy of DIAMOnD on small sets of candidate genes. The dataset we used, even in its curated version, contains a relatively high number of associated genes, some of them not present in other manually curated datasets. We were interested in exploring whether training on datasets with a higher level of curation and smaller numbers of associated genes would change these results.
We performed this additional experiment using the highly curated dataset by Ghiassian et al. (2015). This is the dataset on which DIAMOnD was trained and evaluated in the original publication. The PPI network used here was built considering physical interactions validated experimentally and gathered from different sources, as by Menche et al. (2015). The GDAs were retrieved from OMIM (Online Mendelian Inheritance in Man) (Hamosh et al. 2005) and Genome-Wide Association Studies (GWAS) from PheGenI (Ramos et al. 2014). Because of the high-quality level of curation of these GDAs and of the PPI network, they were used in several gene prioritization experiments (Petti et al. 2021, De Luca et al. 2022, Gentili et al. 2022). We used the PPI and the GDAs of the aforementioned dataset, which we call the OMIM+PheGenI dataset, to train the algorithms. We then validated the models on the GDAs from the all-associations DisGeNET dataset. The goal was to first train the algorithms on high-quality and unbiased data and then test them on an external dataset. For this task, we considered the diseases in common between the two datasets: malignant neoplasm of breast (C0006142), colorectal carcinoma (C0009402), and liver cirrhosis (C0023893). A comparative analysis of the F1 score is shown in Fig. 5; additional metrics can be found in the Supplementary Material.
Figure 3. F1 score (y-axis) comparison for selected diseases (the remaining ones can be found in the Supplementary Material). The metrics are reported at increasing numbers of retrieved genes (x-axis). Dashed lines indicate the standalone XAI method and solid lines the XGDAG version. We notice that using explainability techniques on top of a PU learning prioritization strategy significantly improves the retrieval accuracy of the methods.
The inspection of the results indicates that training on smaller but better curated datasets is beneficial for XGDAG, whereas DIAMOnD suffers from training on smaller sets of seed genes. This further highlights the robustness of XGDAG, whose results are accurate even when the number of seed genes is small. However, the different results obtained when using different datasets demonstrate that data quality plays a major role in gene discovery and prioritization tasks and that a particular focus should be put on the definition of high-quality GDAs and less biased interaction networks (Lazareva et al. 2021).
Enrichment analysis
As a further analysis to enhance the validity of our methodology, we checked whether the candidate genes retrieved by XGDAG were enriched in biological pathways, gene ontologies (GOs) (Ashburner et al. 2000), or other diseases related to the diseases of interest. We provide this analysis for the genes of the DisGeNET dataset prioritized by XGDAG-GNNExplainer. We considered the top 200 genes in our ranking as a reasonable cutoff. We performed the analysis using the Enrichr (Chen et al. 2013, Kuleshov et al. 2016, Xie et al. 2021) web tool and selecting the most statistically significant results according to Fisher's exact test. For disease C0006142 (malignant neoplasm of breast), several significant gene ontologies and pathways were found. Figure 6 shows the 10 most significant GOs for the biological process domain. Indeed, among the most significant GOs retrieved, protein modification was found to be a potential biomarker in breast cancer (Jin and Zangar 2009). Moreover, dysregulated programs in DNA transcription are related to certain behaviors in cancer cells (Bradner et al. 2017). Furthermore, apoptotic process regulation plays an important role in cancer progression and therapies (Reed 2003, Plati et al. 2011, Pfeffer and Singh 2018). The enrichment analysis thus confirmed that the genes retrieved by XGDAG have meaningful associations with the disease. Summarized results for the 10 studied diseases, providing the most enriched pathway, ontology, or associated disease and reference papers confirming the findings, can be found in the Supplementary Material.
Discussion and conclusions
In this work, we propose a new methodology, XGDAG, which relies on PU learning, GNNs, and explainability to detect novel GDAs by providing a prioritization of candidates.
Figure 4. F1 score comparison for selected diseases for the two best-performing XGDAG variants (GNNExplainer and GraphSVX) with known gene discovery methodologies. We notice that, when the number of retrieved genes is small, the various approaches perform comparably. However, as the number of genes increases, XGDAG remains the most stable and robust method, whereas most of the compared strategies tend to become less accurate in the retrieval. More diseases can be found in the Supplementary Material, together with additional visualizations.
XGDAG uses a set of effective features defined in previous work (Stolfi et al. 2023) to enable PU learning by assigning pseudo-classes to unlabeled instances. This information is then leveraged by our GNN, which is able to generate network topology-aware embeddings that allow for high-accuracy predictions. In this context, accurate but black-box models do not provide any information beyond what we already know about gene associations. Thus, given that the reliability of the explanations depends on the quality of the model itself, an accurate model is the base from which we start our explanation phase. The application of several XAI techniques (among which GNNExplainer and GraphSVX are the most effective) opens the black box of the GNN by determining the most influential nodes for the prediction. Some of these nodes are present in the set of genes predicted as LP: these nodes are selected as new candidate genes. This is a novel use of XAI. Generally, the main goal of explainability is to gain insights into the decision process of a model. By contrast, in our approach, we exploit XAI methods to draw the final ranking of candidate genes, with the added value of having an interpretable output. This is a novelty that presents XAI not only as a tool that opens the black box of deep neural networks but also as an analysis component directly incorporated into the GDA discovery pipeline, tasked with producing the final output.
The method outperforms state-of-the-art methodologies for gene discovery, demonstrating the effective synergy of PU learning and explainability on GNN models. The XGDAG results are stable and robust, even considering large numbers of candidate genes.
It is interesting to point out that by using datasets with an in-depth level of manual curation, such as the one by Ghiassian et al. (2015), the retrieval performance of XGDAG increases, demonstrating both the robustness of the approach and the importance of curated data.
Additionally, enrichment analysis uncovers associated pathways, ontologies, and traits linked to the selected diseases, backing up the accuracy of the gene ranking obtained with XGDAG and further proving its effectiveness as a gene discovery strategy.
Our approach is based on the analysis of general graph-structured data, so it can be applied in various settings based on network modeling. Future directions can concentrate on the application of XGDAG to multiplex networks (Halu et al. 2019) and multi-omics data (Krassowski et al. 2020). Notably, datasets such as the Omics Discovery Index (Perez-Riverol et al. 2017, 2019) and ConsensusPathDB (Kamburov et al. 2009, Kamburov and Herwig 2022) combine information from proteomics, metabolomics, genomics, and other interaction networks; expanding the study to encompass this type of data can further enhance the insights acquired through our methodology. Finally, our study suggests that efforts can be put into the development of PU learning and XAI techniques devoted to GNNs for gene discovery purposes, given the rewarding results that can be obtained by the joint use of such methods. The main limitation, as we observed in Section 4.1, is the requirement of high-quality data (Lazareva et al. 2021). This is of course shared by all data-based computational approaches; however, as more genes are discovered and validated, the results will become more trustworthy.
"Computer Science",
"Medicine",
"Biology"
] |
The Art of Double-Betrayal, or What Comparatists Can Learn from Translators
philosophical topics as cultural globalization, untranslatability, among others. I argue that to prevent translation from becoming an explorative apparatus of hegemonic power that leads to cultural appropriation and domination, translators need to continuously stay aware of the fundamental and pre-originary care one innately bears towards the other. An ethics of translation – as that of comparative literature as a discipline – is constitutive of a non-national, non-identity, and non-subjective set of ethical principles that sustains and balances the powers coming from the host regions of related languages, while acknowledging the naturally irreconcilable and chronologically dynamic tensions in between. The very technicalities of the translating practice consist not only of the mechanical endeavor of pairing up words of various linguistic origins, but indeed an art of creating relations for a communicative understanding and experience of the foreign other, and simultaneously presenting a self-effaced invisibility of committed identity through a gesture of double-betrayal and resignation.
thereby help carry the content of one culture to the other while minimizing the loss of nuances in the process.
In light of the necessity of such an ethics of translation as a key to intercultural communications, this paper seeks to provide an ontological account of language that grounds the translation process and that can consequently serve as a practical standard for comparatists, through inquiries into the seminal theoretical works of Jacques Derrida (1930-2004), Antoine Berman, and Emily Apter (1954-). I argue that to prevent translation from becoming an explorative apparatus of hegemonic power that leads to cultural appropriation and domination, translators need to continuously stay aware of the fundamental and pre-originary care one innately bears towards the other. The translation activity, henceforth, is essentially dualistic in that an ethical evaluation perpetually parallels and examines the intensification and direction of the fluidity of power. In addition, this essay aims to cultivate a new understanding of the Italian adage "translator, traitor": instead of casting a negative light on the occupation of translator, one might treat the inevitable "treachery" involved in the translating practice as an indication of the non-identical nature of the ethics of translation. Hence, I propose that the ethics of translation - as that of comparative literature as a discipline - is constitutive of a non-national, non-identity, and non-subjective set of ethical principles that sustains and balances the powers coming from the host regions of related languages, while acknowledging the naturally irreconcilable and chronologically dynamic tensions in between. The very technicalities of the translating practice consist not only of the mechanical endeavor of pairing up words of various linguistic origins, but indeed an art of creating relations for a communicative understanding and experience of the foreign other, and simultaneously presenting a self-effaced invisibility of committed identity through a gesture of double-betrayal and resignation.
Culture, world, and the task of comparatists
Of all the troubles and coercions that unequal power generates in multiple dimensions of human society, language seems to be most problematic, as the generative condition for any creation and event pertinent to the faculty of imagination, in line with Derrida's valorous and proper identification of the productive functioning of language as the generative ground of both literary and concrete events: "Who would contest that without languages, acts and events of language, no institution, no literature, no translation could have the least chance of appearing or of even being imagined?" (Derrida, 2008: 24) The very obstacle that comes unavoidably with the disseminating fashion of the life of language, however, lies primarily - if not entirely - in the impossibility of constructing an ideal type of language or writing that bears no traces of the writer's distinct signature or style inherited from his or her multiple cultural, historical, and psychological backgrounds. As Roland Barthes (1915-1980) aptly observes in his philosophical debut on language, Writing Degree Zero (1953), "it is impossible to write without labelling oneself" (Barthes: 1). Language, for Barthes, is understood as a diagram that composes a horizontal dimension of a generative field of expression, as well as a vertical dimension that necessarily imposes or marks a personal style on the order of words. As he puts it: "A language is therefore a horizon, and style a vertical dimension, which together map out for the writer a Nature, since he does not choose either. The language functions negatively as the initial limit of the possible, style is a Necessity which binds the writer's humour to his form of expression."
(Barthes: 13) Barthes situates his reflections on the essence of language within the limit of individual writers, and thus he may have omitted the perilous consequences that such personal "labels" can bring forth when linguistic practices become a collective activity, namely, the potentiality of constructing a powerful order of discourse by an assemblage of individuals to suppress and subjugate others. Gilles Deleuze (1925-1995) and Félix Guattari (1930-1992) remind us, in their collaborative work A Thousand Plateaus (1980), of the structural nature of language and the enforcement of social order and obedience through the emphases on the dogmatic rules of language. They remark, "The elementary unit of language - the statement - is the order-word […] A rule of grammar is a power marker before it is a syntactical marker" (Deleuze, Guattari: 76). Apart from communicating and expressing, therefore, language, at the same time, executes a political or social function of practicing power and setting in motion the fluidity of power in the hierarchy. Thus, given the inclination and possibility of language to fall prey to the political regime, Deleuze and Guattari argue that an ongoing recurrence within the language in the spirit of Nietzsche is necessary in order to create a "line of flight" that deterritorializes the molar and static force that attempts constantly to formulate power hierarchies. Ronald Bogue (1948-) aptly distills and synthesizes the diagnoses and therapeutics of language in Deleuze and Guattari's philosophical reflections: "The line of flight ultimately is the trajectory of a process of becoming-other, the course of a line that always 'passes between'. The line of flight is both inside and outside language, and its presence is manifest in a constant tendency of language to move beyond itself" (Bogue: 6).
Such a self-reflective and self-regenerative model of language concerns not only certain particular cultural spheres but also the planetary conception of the global, with the latter being only more in need of consideration and theorization. In Remapping Knowledge: Intercultural Studies for a Global Age (2006), Mihai Spariosu (1944-) postulates a theory of the "local-global" - after decades of inquiries into, and witness of, the transformation and turbulences in the process of globalization - to designate the status of cultural community in the age of globalization as a co-presence of both local and global elements, and emphasizes the necessity of establishing "a transdisciplinary field of intercultural studies" (Spariosu: 34) to surpass the limitations cast by conventional cultural models. This symbiotic cultural theory entails an inevitable and perennial tension between a self-sufficient and exclusive localism and an expanding and encompassing globalism; and during such a continuous conflict, culture emerges as a product. Spariosu warns against the will to the forceful implementation of doctrines for the purpose of forging a unitary globality, a dangerous idea he terms "globalitarianism" (Spariosu: viii); as well as the radical resistance from the self-enclosed ideology of the local, typically presented in the form of terrorism. The new dynamism of global intercultural contact - envisioned by Spariosu with the lucid recognition of the emergence of technology and information science - should orient itself principally toward fostering the well-being of "any future human community" (Spariosu: ix) and interdependence through nonviolent dialogues. The advancement of Quantum Relations in theoretical physics - Spariosu argues - marshals a cluster of viable tools for the study of global cultural interactions, considering its adherence to such fundamental notions as alterity and adversity.
Such a theory of cultural globalization finds its foundation in Heidegger's (1889-1976) ontology, which - through an analysis of the thingy work of art as a microcosmic exemplar - highlights the essential strife between the world, "the self-disclosing openness of the broad paths of the simple and essential decisions in the destiny of an historical people", and the earth that "is self-secluding and to that extent sheltering and concealing" (Heidegger: 47). The holistic notion of Being (Sein) necessarily, for Heidegger, assumes instability and conflicts as the grounds for its very sustainability: "World and earth are always intrinsically and essentially in conflict, belligerent by nature". The world parallels the notion of the local in cultural studies, though its openness belongs only to the historical community; whereas the earth corresponds to the universal tendency of cultural hegemony to appropriate the particular and the local. What is important in the opposing conflict between the open and concealment for Heidegger is that the conflict serves as the source of truth, which "happens only by establishing itself in the conflict and sphere opened up by truth itself" (Heidegger: 59), and in turn addresses the significance of the studies of cultural interactions between the various locals on a global scale. However, even though Heidegger refrains from adding any negative connotations to the nature of conflict between the earth and the world, his belief in the process of obtaining Truth by opening up the other that naturally folds and encloses itself invites us to register the intensity of power involved and whether certain ethical limitations should be presumed, lest it give rise to any possible destruction or appropriation. The new paradigm of transdisciplinary intercultural studies situates the perpetual local-global conversations within the process of hylomorphic flow, formulated in the collaborative work of Deleuze and Guattari in Anti-Oedipus (1972), in which they postulate the composition of the world in a schizophrenic fashion as an infinite number of desiring machines that constantly produce a unificatory and imperceptible flow of both material and conceptual elements, without any supposition of beginning and ending. The patterns of a cultural or ontological model of intercultural communication receive their echoes in the discipline of comparative literature studies. In other words, as a principle held by contemporary cultural studies and philosophy with a continental orientation, comparative literature engages the unavoidable tendency of globalization on both a theoretical and concrete path. Comparatists, ever since the founding of their discipline - what Claudio Guillén (1924-2007) terms "the French Hour" - have functioned primarily as observers of these tensions and conflicts, inquiring into the literary influences and transmissions between national and cultural spheres, without a pre-determined posture of "cultural nationalism" and "narcissistic instincts" (Guillén: 4). Coincidentally and curiously, the thriving of the discipline depends on the emergence of modern national literatures that poses a challenge for the static foundation of the canon; hence Guillén concludes, "We then find ourselves before a fruitful historical paradox: the rise of nationalism will lay the foundation for a new internationalism" (Guillén: 27).
The task of comparatists to juggle between the two cultural and political tendencies subsequently encounters a difficulty that is evident even in Guillén's rendering of the discipline as "supranational", as a discipline whose "[…] point of departure is not found in national literatures, nor in the interrelationships between them […]" (Guillén: 3), but rather, as suggested in Goethe's concept of Weltliteratur, from the vantage of the literati's awareness of the cosmopolitan co-existence of all contemporary literatures. The occupation of comparatists needs constant self-reflection; otherwise the "abuse of confidence" will eventually return in disguise -as, for example, in Eric Hayot's (1972-) analysis of the "Eurochronology" of modernism, which "does not have to be the result of any explicit eurocentrism" (Hayot: 6). Comparison here provides subtle excuses for ethics to unbind itself and don new rhetorical masks for arbitrary literary judgements. Comparatists, however, by no means stand alone on the moral front: faithful translators seem necessarily to deal with the conflicts of local-global interactions by the nature of their mission, and they have developed a techné of double-betrayal from which comparatists can learn.
Translation in the age of untranslatability
The urgency of incorporating a techné of double-betrayal - particularly for comparative studies in its most general sense, and for the practitioners whose scope of work encompasses more than one language zone - is becoming more intense in the age of untranslatability, a field established by such translation theorists as Barbara Cassin (1947-), Emily Apter (1954-), and Jacques Lezra (1960-), who - leaning toward the deconstructionist endeavor of Derrida and de Man (1919-1983) - postulate the impasses of untranslatable words as a new direction for comparative literature. Published first in French, the philologist and philosopher Barbara Cassin's Vocabulaire européen des philosophies: Dictionnaire des intraduisibles (2004) gathers some fine essays and analyses from theorists around the world and provides a map of untranslatable terms in philosophy that may potentially jeopardize the universality and applicability of any philosophical attempt. Untranslatability, originally conceived in Diderot (1713-1784) and D'Alembert's (1717-1783) Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers (1751-1766) as the noumenal linguistic restrictions cast upon the human understanding with theological grounding (Apter: 119-120), receives an expanded explication in Cassin's Vocabulaire after more than two centuries, as linguistic aporias coming from "the logic of grammar, the limits of reference, the outer reach of thinkability or the difference between meaningful and meaningless propositions" (Apter: 11). The untranslatable turn in philosophy also generates resonances and contemplations on the previous models adopted in the study of comparative literature. Apter foresees such a possible influence and sketches an appendix to Cassin's groundbreaking work as well as a manifesto for reviving the discipline of comparative literature, in Against World Literature: On the Politics of Untranslatability (2012). Apter outlines the new challenge comparatists face after the publication of Vocabulaire européen des philosophies in her introduction. Though Apter lauds the efforts of scholars of World Literature for the "deprovincialization of the canon and the way in which, at its best, it draws on transition to deliver surprising cognitive landscapes" (Apter: 2), the monolingualization of literatures coming from various linguistic origins, Apter reminds us, runs the risk of the falsehood of believing in two radicalized poles of either "cultural equivalence and substitutability" among all cultures and nations or "the celebration of nationally and ethnically branded 'differences' that have been niche-marketed as commercialized 'identities'" (Apter: 3). Regarding the spirit of World Literature as "the entrepreneurial, bulimic drive to anthologize and curricularize the world's cultural resources" (Apter: 3), Apter maintains that the translatability assumption embedded in the model of World Literature may "fall prey inevitably to the tendency to zoom over the speed bumps of untranslatability in the rush to cover ground" (Apter: 3).
In light of this imminent aporia between certain literary scholars' ambition to establish a universal field of World Literature and the unsurpassable and unignorable untranslatability of concepts and untransferability of meanings, Apter seeks to directly face the problem itself and "activate untranslatability as a theoretical fulcrum of comparative literature with bearing on approaches to world literatures, literary world-systems and literary history, the politics of periodization, the translation of philosophy and theory […] as well as ethical, cosmological and theological dimensions of worldliness" (Apter: 3). What concerns Apter, therefore, is not only the methodologies that comparatists utilize to foster understandings between cultures; a co-existing ethical quest is also present and, as she criticizes, largely ignored by other scholars dealing with related subjects: "they fail to answer fully the challenge of making comparative literature geopolitically case-sensitive and site-specific in ways that avoid reproducing neo-imperialist cartographies" (Apter: 42). Thus, Apter suggests that we bypass the World Literature model and replace it with a dualistic and dynamic foundation of comparative literature with respect to translation studies, as she presents the hypothesis that "translation and untranslatability are constitutive of world forms of literature" (Apter: 16).
One may pause, while reflecting on Apter's critique of World Literature, to consider the causes of such a tendency of monolingualism and expansionism in the acts of attending to other cultures and literatures within the discipline of comparative literature. The initial ethical endeavor to care for and balance the disparity of power, in American academia, has been fostered in two major ideological and theoretical areas - multiculturalism and post-colonialism - where, supposedly, counter-forces to the imperial and dominant power from the past centuries have been generated. Such wishful thinking, however, has perhaps led proponents of these tendencies astray and thus has become yet another instance of the situation they aim to tackle. Looking at the early twentieth century, one may identify the lineage of metamorphosis of the two principles into transnational studies, ethnic studies, etc. What coexists alongside this academic development is the phenomenon of academic stardom and the affirmative territorialization of powerful discourses on the part of leading figures in the fields. Dorothy Figueira (1955-), in Otherwise Occupied (2008), incisively exposes such movements in the humanities, as well as the ingenuity of American monolingualism in the construction of multicultural studies, and offers a thorough critique of the betrayed academic stars who appear to lead others in the wrong direction. For Figueira, the penetration of power into universities and the politicization of education have caused fundamental problems concerning the true beneficiaries of multiculturalism as well as the sincerity and authenticity of postcolonialism, resulting in a clownish showmanship, such that "The critic must self-fashion him-or her-self through imaginary marginalization resulting in the wide-ranging identification of a privileged class of academics with the marginalized Other" (Figueira: 36). The primary symptom Figueira diagnoses lies in the cultivation of a type of commodity fetishism in English departments, which tends to express sympathies and the care for the "exotic" other only through writings in the English language, a renewed and more finely disguised version of cultural hegemony as an aftermath of political strategies, "They not only offered English departments a politically correct identity but also provided additional market fields critical knowledge" (Figueira: 57). Such a rather well-adjusted and playful position of academics in a becoming-monolinguistic society is termed by Figueira the "brahminization of theory" (Figueira: 36), and it has proved a failure in addressing the imbalance of cultural disparity by ignoring the fundamental problem of the language.
The crisis in comparative literature caused by monolingualism, as Apter and Figueira have shown, thus demands a revitalization of the concept of the discipline, not as an attempt to construct territories of translated texts in the form of any particular language with the translatability assumption adopted in the model of World Literature, but as an endeavor to construct time-sensitive versions of world cartography that dynamically and constantly record and engage the alterations of the meanings of words. Such a vital demand for the proliferation, development and survival of diverse literatures engenders natural symbiotic relations between comparative literature and translation studies. Endorsing and inspired by Barbara Cassin's theory of the untranslatables, Apter articulates a definition of comparative literature that is ever-evolving and in the process of changing: "Cassin's Vocabulaire […] gives rise to an idea of comparative literature as a discipline that derives its raison d'être from the constant updating and revision of vocabularies of cultural references; the better to serve as a kind of self-translating machine of the humanities" (Apter: 39). The processive model of comparative literature entails Apter's cautious awareness of the potential of cultural appropriation and intervention - determined and framed by the very nature of the discipline - that comparatists may initiate in the age of globalization, since comparative literature "is no more beset than other humanities fields by the constraints imposed by its historic subject fields (genres, periodizing frames, theoretical paradigms)" (Apter: 42), which have endowed comparatists with the power of judging and categorizing alien literature and culture, with certain presumed criteria. Apter reminds us of the "disciplinary paradox", coined by Nirvana Tanoukhi, that comparative literature as a discipline focusing primarily on national traditions "depends for its existence on the entrenchment of nation-based geography" (Apter: 42). The clusters of national, cultural, and linguistic elements may lead to the construction of a value hierarchy that imposes subjective and unethical judgements on the content under analysis. After delineating the potential ethical crisis in comparative literature, Apter suggests that comparatists embrace and learn from translators, in particular their ways of treating the untranslatable concepts and traditions, which Apter calls a "translational model of Comparative Literature" to avoid any direct contact with the political sphere, since "their plurilingual composition embodies histories of language travel that do not necessarily reproduce imperial trajectories" (Apter: 42). Apter accurately underlines the interrelation between comparative literature and translation studies, and at the same time reminds us of the necessity of formulating an ethics of comparison that prevents interferences from any pre-established power structure. But what is such an ethics? What are the ethical principles that are mutually sharable between translators and comparatists? A detour to Derrida's theory of translation and its affiliation with comparative literature may suggest an outline for such an ethics.
Derrida and the ethics of translation
Who would have thought that the then unknown presenter Jacques Derrida at the 1966 conference under the theme "The Languages of Criticism and the Sciences of Man" hosted by Johns Hopkins University could so profoundly influence the modes of thought in countless humanities and social sciences disciplines and engender a fever of French Theory that has lasted to the present? Though trained in philosophy, Derrida has had an influence in the field of literary studies that is certainly not subsidiary or trivial. As Anne Tomiche ( Kujundžić (1959-). The first text - through a reading of Nietzsche's (1844-1900) autobiography Ecce Homo (1888) - functions as an apology for Nietzsche's posthumous negative influence by arguing that the dissemination of the autobiography depends not on the author's own signature but on the ear of the other, who cosigns with differences in hearing or translating the original text. In Nietzsche's reflexive account of his life, one's birth immediately involves a "riddle" - two co-existing elements - the death of the father and the life of the mother: "I am, to express it in the form of a riddle, already dead as my father, while as my mother, I am still living and becoming old" (Derrida, 1985: 15).
The death of Nietzsche's father and the life of his mother at the moment he is born, together, help foster the sense of self in Nietzsche's course of life, which, in turn, leads to Derrida's association of Nietzsche's description of his life with the process in which one obtains an identity and becomes oneself. Such a process is represented through the development of the name: "There, this is who I am, a certain masculine and a certain feminine. Ich bin der und der, a phrase which means all these things. You will not be able to hear and understand my name unless you hear it with an ear attuned to the name of the dead man and the living feminine - the double and divided name of the father who is dead and the mother who is living on" (Derrida, 1985: 16). But the transferring and continuation of the name after death - the living, namely - depends not on the bearer of the name but on the persons who listen to the name and revive it in the process of infinite eternal return; hence, according to Derrida, one cannot ascribe to Nietzsche the atrocities that the Nazis perpetrated: "One can imagine the following objection: Careful! Nietzsche's utterances are not the same as those of the Nazi ideologues, and not only because the latter grossly caricaturize the former to the point of apishness" (Derrida, 1985: 30). In the following round table discussion on the subject of translation, Derrida points out that the translation activity shares a mechanism that is similar to the continuation of a proper name such as Nietzsche. A translatability that allows the life of a word to last is always co-present with an untranslatability that prohibits any thorough and faithful delivery of the original meaning: "I would say that this desire is at work in every proper name: translate me, don't translate me. On the one hand, don't translate me, that is, respect me as a proper name, respect my law of the proper name which stands over and above all languages. And, on the other hand, translate me, that is, understand me, preserve me within the universal language, follow my law, and so on" (Derrida, 1985: 102). Derrida's explication of the process of translation is surely similar to his theory of hauntology articulated in the late work Specters of Marx (1993), namely that the present lives under the influence of the ghostly presence of the dead. Translation parallels such a mechanism by constituting a medium through which a spectrum of temporality stretches from past to future.
A sense of vitality exists inherently in the act of translating that pertains not only to the communication between two linguistic communities but also to the very survival of texts, which serves as a theoretical ground for Emily Apter's time-sensitive and dynamic definition of comparative literature. Derrida's seminar at Yale University during 1979-1980 further underlines the inherent contradiction between comparative literature as a discipline and what Derrida considers "the essence of literature" (Derrida, 2008: 41); in other words, the untranslatability of proper names that come into being through the self-enclosed cultural community. As Dragan Kujundžić elaborates, "[…] literature keeps a secret to itself, in principle inaccessible to a comparison that a discipline of comparative literature, a violent opening of a secret in fact, would have exposed" (Kujundžić: 9). The lecture takes its point of departure from the impasse of translation Derrida discovers in the very name of the comparative literature discipline, between comparative literature (the English name) and compared literature (a direct translation from the French): I am a foreign 'visiting professor,' speaking in his own language, but within the enclosure of an American university department dominated linguistically by English and whose title is not 'compared literature' [littérature comparée], but, and I translate, 'comparative literature' [littérature comparative]. Depending on the language, as you know, the concept that I name in French 'littérature comparée' receives titles or names, in the Western universities that have such a department, which translate into each other but which we would be mistaken to consider as strict equivalents. (Derrida, 2008: 25) Derrida foresees a necessary transformation in comparative literature, therefore, from "a discipline of study" to "literature [as]… literary practice" (Derrida, 2008: 43). Interestingly, both texts point to an impasse in translation studies, between the untranslatability of proper nouns that belong only to one particular language or literary work, and the requirement of the readability of such proper names in another language that does not bear pure references: In fact, there are two simultaneous demands governing the proper name which one must not be too quick to separate from each other: on the one hand, a requirement of untranslatability and unreadability, as if the proper name were nothing but pure reference, lying outside of signification and language; on the other hand, a requirement of translatability and readability, as if the proper name were assimilated to the common noun, to any word that is caught up in a linguistic and genealogical network where meaning already contaminates nonmeaning and where the proper name is absorbed and expropriated by the common noun. (Derrida, 1985: 93) Claude Lévesque even advances this aporia of translatability, in his response to Derrida's reading of Ecce Homo, by extending this unresolvable opposition to the political domain, between "nationalism and universalism" (Derrida, 1985: 93). Hence, translating and treating the untranslatable have become the seminal task of translators, and consequently that task demands a new understanding and ethics of translation as literary practice.
Derrida derives a solution by exploring Benjamin's concept of "Überleben", translated as living on or survival, as "the structure of the original" (Derrida, 1985: 121), of which the translator must be constantly aware and to which he must respond. In other words, before any translating practice, the translator must be equipped with a pre-originary duty and care for the very survival and growth of the original text, revealed as what Derrida calls the "translation contract", a commitment to "transforming the original as well as the translation" (Derrida, 1985: 122). The significance of Nietzsche's autobiography lies in the new understanding that the proper name - his own name - depends on two parts: the already dead and the living. Readers and translators, thus, react to and engage the living element of a text and aim at caring for and extending the duration of its survival: "To understand a text as an original is to understand it independently of its living conditions - the conditions, obviously, of its author's life - and to understand it instead in its surviving structure" (Derrida, 1985: 122). Benjamin (1892-1940) does a superb job of depicting and specifying the relationship between the life of the text and the translator who deals with the text: "Given the surviving structure of an original text - always a sacred text in its own way insofar as it is a pure original - the task of the translator is precisely to respond to this demand for survival which is the very structure of the original text. (Notice Benjamin does not say the task of translation but rather of the translator, that is, of a subject who finds him/herself immediately indebted by the existence of the original, who must submit to its law and who is duty-bound to do something for the original)" (Derrida, 1985: 122). It seems that Derrida's reading and interpretation of Benjamin comes from Levinas' discovery of the pre-originary ethics that constantly articulates "Thou shalt not kill" as the primary order of ethics. Derrida then furthers the notion of duty for the survival of the text by stating that this "caring for" is fundamentally about admitting the very existence of language, which Derrida frames via his interpretation of Benjamin's well-known term "die reine Sprache", or pure language, denoting "not one which has been purified of anything; rather, it is what makes a language a language" (Derrida, 1985: 123). Derrida, however, does not merely posit certain abstract and ontological judgements; he also provides a set of criteria by which one can determine the ethical value of a translation activity: "translation augments and modifies the original, which, insofar as it is living on, never ceases to be transformed and grow. It modifies the original even as it also modifies the translating language" (Derrida, 1985: 122). By asserting these criteria, Derrida formulates a perpetual parallelism between the original work and its translations - the latter must grow with the former through self-updating, moving according to the evolution of the language into which it is translated. Respect for the existence of language, particular or univocal, may serve as the primal condition for Wai Chee Dimock's ambitious construction of "literary transnationalism", which argues for "the pluralingual dissemination of codes, genres, styles or ideas across the borders of time and territorial sovereignty" (Apter: 83).
Being aware of and responsive to untranslatability also supports Apter's warning against the translatability assumption of World Literature, or "oneworldliness," as a potential form of cultural hegemony.
However, as Derrida remarks, the act and process of translation between different languages will necessarily encounter aporias of untranslatable words, due to the proper name's dependence on its unique referent, which, surprisingly, constitutes the definition of comparative literature: "when a so-called proper name is not simply proper, when it maintains meaningful relations [rapports significants] with common nouns and the meaning meant [le sens visé] by common nouns, its resistance to translation carries with it entire regions of untranslatability" (Derrida, 2008: 37). A proper name is by nature untranslatable, bearing a unique referent that could not be translated otherwise because of the limitation cast by its non-conceptualizable meaning. Throughout his essay, Derrida explores the fictional character Pangloss in Voltaire's seminal novel Candide and delineates the phenomenon of the co-presence of both the untranslatability of the anomalous name as a fictional creation, whose signifier harbors nothing outside the literary work, and an absolute translatability once the proper name Pangloss is perceived from the perspective of the cluster of associative elements in the fiction, whereby the proper noun reaches "as close as possible to a common noun; it is loaded with meaning and visibly reaches (in a figural or allegorical fashion, as you wish) well beyond its individual bearer, its novelistic character" (Derrida, 2008: 39). Pan-gloss, the forged word bearing the meaning of "all languages", thus functions as a model of Benjamin's pure language, with the seemingly ironic impasse of coexisting translatability and untranslatability, which, Derrida maintains, "is the figure of a thesis on translation and comparative literature" (Derrida, 2008: 39). By likening the situation of Pangloss to the study of comparative literature, Derrida therefore renders the discipline as a proper name that includes - at the same time - "an open yet unique ensemble of {unique} individuals that exist according to a certain mode but that, as such, are unique" (Derrida, 2008: 50). The act of comparative literature takes on the infinite process of comparing within the collective ensemble of individual works and of altering perspectives from self to the other, and vice versa: "The minimal consensus, the character of the comparativists, is that one must compare literatures among themselves, literary phenomena among themselves, or in any case phenomena having an essential relation with literature among themselves. One must compare literature with literature" (Derrida, 2008: 50). Following Derrida, the transitions of roles between subjective and objective angles in the very essence of comparative literature demand a new ethics that is capable of perpetually sustaining the balance of power.
From double-traitor to "the general man"
Basing his inquiries into the history of translation on the German experience during the Romantic period, Antoine Berman attempts to construct a self-reflective theory of translation that has an ethics at its core, given the belief that a translator needs to cultivate a consciousness of "what translation must mean in our cultural setting today" (Berman: 4). Such cultural sensitivity, attending to two cultures separated by at least linguistic difference, unavoidably has to answer the accusation of the Italian adage "traduttore, traditore" - translator, traitor. Berman invokes Franz Rosenzweig's (1886-1929) well-known claim that the occupation of the translator is to serve two masters: the culture where the original text resides, and the culture of their own mother tongue. But such a position simultaneously makes it easy for the translator to fall under accusation from either side, and even worse, being sympathetic toward two masters will situate translators as double-traitors to both. Hence translators find themselves constantly oscillating between imposing the other culture on their own and opening the other culture whose openness belongs only to its own historical community. As Berman asserts, "The very aim of translation - to open up in writing a certain relation with the Other, to fertilize what is one's Own through the mediation of what is Foreign - is diametrically opposed to the ethnocentric structure of every culture" (Berman: 4). And consequently, "translation occupies an ambiguous position. On the one hand, it heeds this appropriationary and reductionary injunction, and constitutes itself as one of its agents […] on the other hand, the ethical aim of translating is by its very nature opposed to this injunction: The essence of translation is to be an opening, a dialogue, a cross-breeding, a decentering" (Berman: 4). With respect to the ethics of translation, Berman calls for a conception of the fidelity of the translator as an affirmative and defensive position, in a spirit more pertinent to science than art. In addition, given the differences between languages and the inevitable change - on the levels of both language and text - taking place in the act of translating, Berman insists that one should also raise analytical awareness of the "'gains' and 'losses' manifested in all translations, even successful ones" (Berman: 6), while, echoing Schleiermacher, maintaining a sense of respect not for the culture but for the language that "posits other language as ontologically superior to the translator's own language" (Berman: 8). The task of neutralizing pressures from both cultures and insisting on an ethics of translation with primary respect for pure language, in line with Berman's ethical and analytical model, seems doomed to fall into a moral dilemma between cultural fidelity and translation ethics.
I wish to point out, in conclusion, that the translation ethics of caring and duty for language qua language already signifies a transcendence beyond any national and cultural borders to the zone of indiscernible identities; it transforms translators into unbound cosmopolitan general humans, or better, helps them take on the molecular line of flight of becoming-imperceptible of their own identity in the process of translating. The art of double-betrayal, thus, lies in the capability of initiating and facilitating cultural communication while serving the two masters well, invisibly, by positioning the translator herself as a Derridean aporia that is both present-functioning and absent-unsigning cultural codes. The translator is both particular and universal. Here I want to cite a poem titled "The Faith That Matters" by the French poet and multilingual translator Armand Robin (1912-1961), whom Antoine Berman regards as a translator of "omnipotent cosmopolitanism":

I am not Breton, French, Latvian, Chinese, English
I am all that at once.
I am the universal and general man of the entire world. (Berman: 138)

In the same vein, the ethical stance of comparatists in the act of comparing literature with literature by constantly changing subjective and objective dispositions - as Derrida notes - may also require a self-effacing and becoming-imperceptible of the one who compares, situating oneself in the process of displaying, as Guillén hopes, "the elucidation of the cross-fertilizations and other grafts that link these subspecies and give rise to their mutations, hybridization, and growth […] with a firm belief in the uniqueness of the character of each people" (Guillén: 36). Engaging such philosophical topics as cultural globalization and untranslatability, among others, I argue that to prevent translation from becoming an exploitative apparatus of hegemonic power that leads to cultural appropriation and domination, translators need to continuously stay aware of the fundamental and pre-originary care one innately bears towards the other. An ethics of translation - like that of comparative literature as a discipline - is constitutive of a non-national, non-identity, and non-subjective set of ethical principles that sustains and balances the powers coming from the host regions of the related languages, while acknowledging the naturally irreconcilable and chronologically dynamic tensions in between. The very technicalities of the translating practice consist not only of the mechanical endeavor of pairing up words of various linguistic origins, but also of an art of creating relations for a communicative understanding and experience of the foreign other, while simultaneously presenting a self-effaced invisibility of committed identity through a gesture of double-betrayal and resignation. | 8,982.4 | 2019-01-01T00:00:00.000 | [ "Philosophy", "Linguistics" ] |
Dissecting the Genomic Architecture of Resistance to Eimeria maxima Parasitism in the Chicken
Coccidiosis in poultry, caused by protozoan parasites of the genus Eimeria, is an intestinal disease with substantial economic impact. With the use of anticoccidial drugs under public and political pressure, and the comparatively higher cost of live-attenuated vaccines, an attractive complementary strategy for control is to breed chickens with increased resistance to Eimeria parasitism. Prior infection with Eimeria maxima leads to complete immunity against challenge with homologous strains, but only partial resistance to challenge with antigenically diverse heterologous strains. We investigate the genetic architecture of avian resistance to E. maxima primary infection and heterologous strain secondary challenge using White Leghorn populations derived from the inbred lines C.B12 and 15I, known to differ in susceptibility to the parasite. An intercross population was infected with E. maxima Houghton (H) strain, followed 3 weeks later by E. maxima Weybridge (W) strain challenge, while a backcross population received a single E. maxima W infection. The phenotypes measured were parasite replication (counting fecal oocyst output or qPCR for parasite numbers in intestinal tissue), intestinal lesion score (gross pathology, scale 0–4), and for the backcross only, serum interleukin-10 (IL-10) levels. Birds were genotyped using a high density genome-wide DNA array (600K, Affymetrix). A genome-wide association study located associations on chromosomes 1, 2, 3, and 5 following primary infection in the backcross population, and a suggestive association on chromosome 1 following heterologous E. maxima W challenge in the intercross population. This mapped several megabases away from the quantitative trait locus (QTL) linked to the backcross primary W strain infection, suggesting different underlying mechanisms for the primary and heterologous secondary responses. Underlying pathways for those genes located in the respective QTL for resistance to primary infection and protection against heterologous challenge were related mainly to immune response, with IL-10 signaling in the backcross primary infection being the most significant. Additionally, the identified markers associated with IL-10 levels exhibited significant additive genetic variance. We suggest this is a phenotype of interest to the outcome of challenge, being scalable in live birds and negating the requirement for single-bird cages, fecal oocyst counts, or slaughter for sampling (qPCR).
INTRODUCTION
Coccidiosis is an intestinal disease caused by intracellular protozoan parasites of the genus Eimeria. The control of coccidiosis is a challenge to the international poultry industry, with economic losses estimated at USD 3 billion annually (Dalloul and Lillehoj, 2006). Current control of coccidiosis relies on the prophylactic use of anticoccidial drugs, or vaccination with formulations of live wild-type or attenuated parasites (Crouch et al., 2003; McDonald and Shirley, 2009). However, use of some anticoccidial drugs has been curtailed by legislation, while the limited production capacity and costs of live attenuated vaccines compromise their utility in broiler flocks. Thus, there is a need for complementary strategies to control coccidiosis in poultry. A promising approach would be to breed chickens for increased genetic resistance and increased vaccine response to Eimeria parasitism, since there is evidence for relevant host genetic variation (Johnson et al., 1986; Bumstead and Millard, 1992).
Coccidiosis in poultry is caused by seven distinct Eimeria species (Reid et al., 2014), with Eimeria maxima being one of the most common causes of coccidiosis in commercial broilers. Immunity induced by primary infection (vaccination) against E. maxima is commonly strain-specific, with immune escape contributing to sub-clinical coccidiosis symptoms that include decreased feed conversion efficiency, marked weight loss, and low performance (Fitz-Coy, 1992; Blake et al., 2005). Johnson et al. (1986) demonstrated variance in coccidiosis susceptibility in chickens as a prerequisite to selective breeding for resistance. A subsequent study using several inbred White Leghorn lines established variance for benchmark phenotypes when chickens were infected with controlled doses of Eimeria spp. (Bumstead and Millard, 1987; Bumstead and Millard, 1992). The between-line variation observed in oocyst production was not correlated with weight loss or mortality, indicating that the within-trait observations were a result of effect accommodation rather than parasite restriction. The greatest differences in parasite replication (PR) were between line 15I and line C major histocompatibility complex (MHC) haplotype B12 (C.B12) chickens, which produced relatively high and low numbers of oocysts, respectively (Bumstead and Millard, 1987; Smith et al., 2002). Most notably, primary infection with the Houghton or Weybridge reference E. maxima strains induces 100% protection against secondary homologous challenge in 15I and C line chickens (Smith et al., 2002). However, the outcome of heterologous challenge varied by parasite strain and host genotype combination (Smith et al., 2002; Blake et al., 2004, 2005). Despite the substantial financial losses to industry caused by coccidiosis, few studies have attempted to identify quantitative trait loci (QTL) for resistance to E. maxima infection, and there are no relevant studies on the genetics of heterologous secondary challenge response.
The present study extends previous work in inbred chicken lines to determine the genetic architecture of E. maxima resistance, i.e., lack of PR, and of protection against secondary challenge with a heterologous E. maxima strain. First, an F2 intercross of the inbred White Leghorn chicken lines C.B12 × 15I was initially infected with E. maxima H, followed 3 weeks later by challenge with E. maxima W to investigate response to challenge with the heterologous strain. Fecal oocyst output was counted to determine the severity of challenge. Second, a backcross population from the same two inbred lines [(C.B12 × 15I) × C.B12] was infected with E. maxima W to study primary resistance to parasitism. Three phenotypes were determined for these birds following infection: PR by qPCR for parasite numbers in intestinal tissue, intestinal lesion score (LS) (gross pathology, scale 0-4), and levels of serum interleukin-10 (IL-10), a novel biomarker found to be positively correlated with the pathology trait in chickens infected with E. tenella (Wu et al., 2016; Boulton et al., 2018). All birds were then genotyped using a 600K Affymetrix® Axiom® HD array (Kranis et al., 2013), enabling genome-wide association studies (GWASs), followed by pathway analysis to identify candidate genomic regions, pathways, networks, and genes for resistance to E. maxima primary infection and effective responses to challenge with a heterologous strain.
Ethics Statement
These trials were conducted under Home Office Project Licence in accordance with Home Office regulations under the Animals (Scientific Procedures) Act 1986 and the guidelines set down by the Institute for Animal Health and RVC Animal Welfare and Ethical Review Bodies.
Parasites
The E. maxima Houghton (H) and Weybridge (W) strains were used throughout these studies (Norton and Hein, 1976). Routine parasite passage, sporulation, and dose preparation were undertaken as described previously (Eckert et al., 1995) using specific pathogen free Light Sussex or Lohman LSL chickens. Oocysts were used within 1 month of harvest.
Animals
Inbred chicken lines 15I and C, derived from White Leghorn flocks at the USDA-ARS Avian Disease and Oncology Laboratory in East Lansing, MI, United States, have been maintained by random mating within the specified-pathogen-free (SPF) flocks at the Pirbright Institute [formerly the Institute for Animal Health (IAH)], United Kingdom, since 1962 and 1969, respectively. F2 intercross birds (n = 195) were generated by crossing nine F1 (C.B12 × 15I) male progeny with 27 unrelated F1 female progeny at the IAH (Compton site). Six birds from each of the two parental lines, 15I and C.B12, were also hatched and kept under the same experimental conditions as the F2 birds (individual cages post-challenge).
To generate the backcross (n = 214), 20 F1 (C.B12 × 15I) male progeny were crossed with 100 unrelated C.B12 line females. The breeding was performed in the SPF Bumstead facility at the Roslin Institute, The University of Edinburgh, United Kingdom. Day-old chicks were transported in isolated SPF containment to the Royal Veterinary College poultry barn, University of London, United Kingdom, where the primary infection with E. maxima W sporulated oocysts was conducted in floor pens.
Intercross Population
F2 intercross (n = 195) and 12 parental line birds were initially infected by oral gavage with 100 sporulated oocysts of E. maxima H at 25 days of age and moved to individual cages. Feces were collected from each bird on a daily basis during the 5-10 day post-infection (pi) period. Three weeks later (47 days of age), a secondary challenge was initiated by oral gavage of 250 sporulated oocysts of E. maxima W. Feces were again collected from each bird on a daily basis during the 5-10 day post-challenge period.
Backcross Population
At 21 days of age, chickens were inoculated by oral gavage with either 1 ml distilled water (control group, n = 20) or 100 sporulated oocysts of E. maxima W (infected group, n = 194). To avoid cross-infection the control group was housed separately. Birds were euthanised humanely at day 7 pi, coinciding with the peak pathological effects of E. maxima (Rothwell et al., 2004), providing the greatest sensitivity for parasite genome detection (Blake et al., 2006). A blood sample from each bird was collected post-mortem via aortic rupture into 1.5 ml Sigma-Aldrich (Dorset, United Kingdom) microcentrifuge tubes. Bijou tubes (7 ml, Sterilin™) containing 5-10 volumes of room-temperature RNAlater® (Life Technologies, Carlsbad, CA, United States) were used to store 5.0 cm of intestinal tissue and content from either side of Meckel's diverticulum.
Phenotyping
Individual oocyst output was used to study the outcome of the E. maxima H primary infection and secondary heterologous E. maxima W challenge in the intercross chicken population. Oocysts were quantified daily (5 to 10 days post-infection and post-challenge) using a microscope and saturated salt flotation in a McMaster counting chamber (Eckert et al., 1995; Smith et al., 2002). Daily totals were combined to provide a total count of oocyst output per bird for both the primary infection and the secondary challenge. Oocyst counts were log-transformed to approximate a normal distribution.
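As a concrete illustration of this phenotype derivation, the base-R sketch below sums the daily counts over the 5-10 day collection window into a per-bird total and log-transforms it. The data frame fecal_counts and all values in it are invented, and the exact transform (log10 of count + 1) is an assumption; the text states only that counts were log-transformed.

# Minimal sketch: per-bird total oocyst output and log transform.
# `fecal_counts` and its values are hypothetical.
fecal_counts <- data.frame(
  bird_id = rep(c("b01", "b02", "b03"), each = 6),
  day     = rep(5:10, times = 3),
  oocysts = c(0, 1200, 54000, 230000, 98000, 4100,
              0,  800, 31000, 120000, 66000, 2500,
              0, 2100, 77000, 410000, 150000, 9900)
)

# Sum the daily counts over the collection window for each bird
totals <- aggregate(oocysts ~ bird_id, data = fecal_counts, FUN = sum)

# log10(x + 1) guards against zero counts (assumed transform)
totals$log_oocysts <- log10(totals$oocysts + 1)
print(totals)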
The phenotypes used to study resistance to E. maxima W primary infection in the backcross population were relative intestinal Eimeria genome copy number (PR, measured using quantitative PCR as parasite genomes per host chicken genome), intestinal LS (pathology, on a scale of 0-4), and serum IL-10 level (IL-10). Quantitative real-time PCR targeting the E. maxima microneme protein 1 (EmMIC1) and Gallus gallus β-actin (actb) loci was performed using total genomic DNA extracted from a 10 cm length of intestinal tissue centered on Meckel's diverticulum using a DNeasy Blood and Tissue kit (Qiagen, Hilden, Germany). Briefly, each complete tissue sample was disaggregated using a Qiagen TissueRuptor and an aliquot was processed for extraction of combined host and parasite DNA (see Blake et al., 2006, for full details). A CFX96 Touch™ Real-Time PCR Detection System (Bio-Rad Laboratories, Hercules, CA, United States) was used to amplify each sample in triplicate (Nolan et al., 2015), with an additional bead-beater homogenization step prior to buffer ATL treatment (including 1 volume of 0.4-0.6 mm glass beads, 3,000 oscillations per minute for 1 min). Intestinal pathology was assessed by the same experienced operator scoring lesions according to Johnson and Reid (1970). A capture ELISA was used to measure IL-10, employing ROS-AV164 and biotinylated ROS-AV163 as capture and detection antibodies, respectively (see Wu et al., 2016, for full details). IL-10 levels and parasite genome numbers were log-transformed to approximate a normal distribution.
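The published assay (Blake et al., 2006) expresses PR as parasite genomes per host genome; one common way to obtain such a relative quantity from paired Cq values is the 2^-ΔCq formula, which assumes roughly 100% amplification efficiency for both targets. The sketch below applies that formula to invented triplicate Cq values; it illustrates the idea rather than reproducing the exact published calculation.

# Illustrative relative quantification from triplicate Cq values,
# assuming ~100% efficiency for both amplicons (2^-dCq). All Cq
# values are invented; the published protocol may differ in detail.
cq_emmic1 <- c(24.8, 24.9, 24.7)   # E. maxima EmMIC1 triplicates
cq_actb   <- c(21.3, 21.2, 21.4)   # chicken beta-actin triplicates

delta_cq <- mean(cq_emmic1) - mean(cq_actb)
pr <- 2^(-delta_cq)                # parasite genomes per host genome (relative)
log_pr <- log10(pr)                # log-transformed, as in the study
cat(sprintf("relative PR = %.4f, log10(PR) = %.2f\n", pr, log_pr))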
Phenotypic Correlations
Following log-transformation of PR and IL-10, all backcross phenotypic traits were rescaled to remove differences in units of measurement. Then, fitting host sex as a fixed effect in a multivariate linear model, phenotypic correlations (rP) were estimated using ASReml 4.1 (Gilmour et al., 2015).
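ASReml is commercial software, but the essence of this step can be sketched in base R: rescale the log-transformed traits, regress out the fixed effect of sex, and correlate the residuals. This is only an approximation of the multivariate mixed-model estimates, and all data below are simulated.

# Approximate the phenotypic-correlation step in base R; the real
# analysis used a multivariate linear model in ASReml 4.1.
set.seed(1)
d <- data.frame(
  sex  = factor(sample(c("M", "F"), 200, replace = TRUE)),
  pr   = rlnorm(200),                       # simulated parasite replication
  ls   = sample(0:2, 200, replace = TRUE),  # simulated lesion scores
  il10 = rlnorm(200)                        # simulated serum IL-10
)
d$pr <- log10(d$pr + 1); d$il10 <- log10(d$il10 + 1)  # log-transform

traits <- scale(d[, c("pr", "ls", "il10")])  # rescale to comparable units

# Regress out sex from each trait, then correlate the residuals
resids <- apply(traits, 2, function(y) resid(lm(y ~ d$sex)))
round(cor(resids), 2)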
Genome-Wide Association Studies
Sixty-seven F2 birds exhibiting the most extreme phenotypes, plus the 12 intercross parental line birds and the entire backcross generation, were genotyped using the 600K Affymetrix® Axiom® HD genotyping array (Kranis et al., 2013). Although each data set was analyzed separately, the same GWAS steps were used for both populations. The marker genotype data were subjected to quality control, with markers excluded if their minor allele frequency was < 0.02 or their call rate was not > 90%. Deviation from Hardy-Weinberg equilibrium was not considered a reason for excluding markers, since these were experimental populations of inbred lines. After quality control, 203,845 intercross and 204,072 backcross markers remained and were used, respectively, to generate separate intercross and backcross genomic relationship matrices (GRMs) to investigate the presence of population stratification. Next, each GRM was converted to a distance matrix that was analyzed with classical multidimensional scaling using the GenABEL package in R (Aulchenko et al., 2007) to obtain principal components. These analyses revealed three principal components in the intercross population (one for each parental line and one for F2 birds), but no substructure in the backcross. GWAS for each trait were then conducted using GenABEL based on a mixed model, with the population principal components fitted as covariates (intercross population only), sex fitted as a fixed effect in both studies, and the GRM fitted as a random polygenic effect to adjust for population sub-structure. In the GWAS for heterologous secondary challenge response, the oocyst output following the first challenge was also fitted as a covariate to account for the effect of the first challenge. After Bonferroni correction for multiple testing, significance thresholds were P ≤ 2.45 × 10−7 and P ≤ 4.90 × 10−6 for the genome-wide (P ≤ 0.05) and suggestive (namely, one false positive per genome scan) significance levels, respectively, corresponding to −log10(P) values of 6.61 and 5.30. The extent of linkage disequilibrium (LD) between significant markers located on the same chromosome regions was calculated using the r-square statistic of PLINK v1.09 (Purcell et al., 2007).
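The two significance thresholds quoted above follow directly from the marker counts, as the short base-R check below shows (using the backcross marker count; the GRM-based scaling step is sketched only in comments, since the marker matrix itself is not reproduced here and the distance transform is an assumption).

# Reproduce the Bonferroni-derived thresholds from the marker count
n_markers <- 204072                  # backcross markers after QC
genomewide <- 0.05 / n_markers       # ~2.45e-07
suggestive <- 1 / n_markers          # ~4.90e-06 (one expected false positive)
-log10(c(genomewide, suggestive))    # ~6.61 and ~5.31

# Population-structure check, assuming a GRM `grm` is available;
# 2 - 2 * grm is one common GRM-to-distance transform (an assumption)
# pcs <- cmdscale(as.dist(2 - 2 * grm), k = 3)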
Effects of the significant markers identified in each GWAS were re-estimated in ASReml 4.1 (Gilmour et al., 2015) by individually fitting the markers as fixed effects in the same model as used for the GWAS analyses. Effects were calculated as follows: additive effect, a = (AA − BB)/2; dominance effect, d = AB − (AA + BB)/2, where AA, BB, and AB were the predicted trait values for each genotype class.
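A worked example of these two definitions, with invented genotype-class means:

# Worked example of a = (AA - BB)/2 and d = AB - (AA + BB)/2;
# the genotype-class means are invented for illustration.
means <- c(AA = 1.9, AB = 1.2, BB = 0.7)

additive  <- (means["AA"] - means["BB"]) / 2               # 0.6
dominance <- means["AB"] - (means["AA"] + means["BB"]) / 2 # -0.1
c(a = unname(additive), d = unname(dominance))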
All significant markers identified in the GWAS for responses to primary infection and secondary E. maxima W challenge were mapped to the reference Gallus gallus domesticus genome and annotated using the Variant Effect Predictor tool (http://www.ensembl.org/Tools/VEP) within the Ensembl (genome browser 92) database and the Gal-gal5 assembly (https://www.ncbi.nlm.nih.gov/assembly/GCF_000002315.4/). Furthermore, genes located within 100 kb up- and down-stream of the significant markers were annotated using the BioMart data mining tool (http://www.ensembl.org/biomart/martview/) and the Gal-gal5 assembly. This method of annotation enabled all genes located in the vicinity of the identified significant markers to be identified and cataloged.
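The 100 kb windowing itself is a simple interval-overlap query; the base-R sketch below shows the logic on hypothetical marker and gene tables (gene names and all coordinates are invented - the real gene list came from BioMart on the Gal-gal5 assembly).

# Catalog genes within 100 kb of each significant marker.
# `markers` and `genes` are hypothetical stand-ins for BioMart output.
markers <- data.frame(chr = c(1, 1), pos = c(5.00e7, 1.71e8))
genes <- data.frame(
  gene  = c("PAH", "PMCH", "GENEX"),
  chr   = c(1, 1, 2),
  start = c(1.7095e8, 1.7102e8, 3.0e6),
  end   = c(1.7100e8, 1.7108e8, 3.1e6)
)

window <- 1e5
hits <- do.call(rbind, lapply(seq_len(nrow(markers)), function(i) {
  m <- markers[i, ]
  # a gene overlaps [pos - window, pos + window] iff both conditions hold
  subset(genes, chr == m$chr & start <= m$pos + window & end >= m$pos - window)
}))
unique(hits)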
Re-sequencing Data Analysis
To identify possible protein-coding genes associated with the detected QTL, genomic sequences in the regions of interest from the line 15I and C.B12 chickens were compared. The two parental chicken lines were entirely re-sequenced at 15-20 fold coverage, using pools of 10 individuals per line, on an Illumina GAIIx platform using a paired-end protocol (Krämer et al., 2014). Re-sequencing data of the candidate regions (i.e., 1 kb up- and downstream of the candidate gene end sites), for resistance to primary infection and heterologous challenge derived from the intercross and backcross analyses, were then extracted and examined separately. Using the Mpileup tool for marker calling (SAMtools v0.1.7; Li et al., 2009), single nucleotide variants (SNVs) between the two parental lines and the reference genome in these regions were detected. These were then annotated using the same Variant Effect Predictor software as above. Information for all SNVs [intergenic, intronic, exonic, splicing, 3′ and 5′ untranslated regions (3′ UTR, 5′ UTR)] present in the regions of interest was collated. Intergenic, intronic, and exonic synonymous variants were then filtered out, along with SNVs that were common to the two parental lines but different from the reference genome. Thus, only sites that were different between the parental lines and had an effect on the coding sequence (nonsense, missense, splicing) or a potential effect on gene expression (3′ UTR and 5′ UTR) were retained for further study.
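The retained set is defined by two conditions: the alleles must differ between the parental lines, and the predicted consequence must be coding or putatively regulatory. A minimal base-R sketch of that filter, on a hypothetical table shaped loosely like VEP output:

# Sketch of the variant-filtering logic described above.
# `snv` and its column names are hypothetical.
snv <- data.frame(
  id          = paste0("snv", 1:5),
  allele_15I  = c("A", "G", "T", "C", "A"),
  allele_CB12 = c("A", "T", "T", "G", "G"),
  consequence = c("missense", "intronic", "3_UTR", "missense", "synonymous")
)

keep_csq <- c("nonsense", "missense", "splicing", "3_UTR", "5_UTR")

# Keep only sites differing between lines with a retained consequence
retained <- subset(snv, allele_15I != allele_CB12 & consequence %in% keep_csq)
retained   # here only snv4 survives both conditions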
Pathway, Network, and Functional Enrichment Analyses
Identification of potential canonical pathways and networks underlying the candidate genomic regions associated with outcomes of primary infection and heterologous secondary E. maxima challenge was performed using the Ingenuity Pathway Analysis (IPA) program. IPA constructs multiple possible upstream regulators, pathways, and networks that serve as hypotheses for the biological mechanism underlying the phenotypes, based on a large-scale causal network derived from the Ingenuity Knowledge Base. After correcting for a baseline threshold and calculating statistical significance, the most likely pathways involved are inferred (Krämer et al., 2014). The constructed networks can then be ranked using their IPA score, based on the P-values obtained using Fisher's exact test [IPA score or P-score = −log10(P-value)].
The gene lists for each phenotype were also analyzed using the Database for Annotation, Visualization and Integrated Discovery (DAVID; Dennis et al., 2003). To understand the biological meaning behind these genes, gene ontology (GO) terms were determined, and functional annotation clustering analysis was performed using the whole G. gallus background. The enrichment score (ES) of DAVID is a modified Fisher exact P-value calculated by the software, with a higher ES reflecting a more enriched cluster. An ES > 1 means that the functional category is overrepresented.
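Both scores above reduce to Fisher's exact test on a 2×2 table of pathway membership among candidate versus background genes, reported on a −log10 scale. A worked example with invented counts:

# Fisher exact test on a 2x2 membership table and its -log10 P-value,
# the form of IPA's score; DAVID's ES is a modified Fisher P-value,
# so the same conversion applies. All counts are invented.
tab <- matrix(c(8, 32,        # candidate genes: in pathway / not
                120, 15000),  # background genes: in pathway / not
              nrow = 2, byrow = TRUE)

p <- fisher.test(tab, alternative = "greater")$p.value
score <- -log10(p)            # IPA-style score / P-score
c(p_value = signif(p, 3), score = round(score, 2))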
Descriptive Statistics
Phenotypic distributions for oocyst counts following primary infection with E. maxima H and secondary challenge with E. maxima W in the intercross and parental populations, along with relative parasite DNA and IL-10 levels in the backcross population after primary infection with E. maxima W, are presented in Figures 1A-C. After primary infection, the pure line C.B12 birds produced lower E. maxima oocyst counts compared to the pure line 15I and F2 birds, with the highest oocyst output recorded in the pure line 15I group. Conversely, inverse findings regarding oocyst output were recorded in the two parental lines following heterologous secondary strain challenge. These results agree with previous findings that show line C.B12 birds develop no cross-protection between primary H and secondary W strain challenges, while line 15I birds develop significant cross-protection when infected in this order (Smith et al., 2002; Blake et al., 2005). Among the backcross chickens, following infection with E. maxima W, phenotypic scores for intestinal lesions were low (0-2); however, significant variance (P = 0.05) was noted (Table 1). Estimated phenotypic correlations between the three measured traits ranged from 0.08 to 0.15, with only the correlation between LS and IL-10 being statistically significant (r(LS,IL-10) = 0.15 ± 0.07; Figure 1D and Table 1).
Intercross Study
Genome-wide association study analysis for oocyst output following primary infection of the intercross population with E. maxima H did not reveal significant associations after the strict Bonferroni correction. However, an association with markers on chromosome 2, just below the suggestive threshold, was observed (results not shown). GWAS analysis following secondary challenge with the heterologous E. maxima W strain identified 11 markers on chromosome 1, all having suggestive associations with the trait in the intercross population. These 11 markers belonged to the same LD block (499 bp, r² = 1; Figure 2 and Table 2). The corresponding Q-Q plot for the intercross GWAS result is shown in Figure 2.
The 11 significant markers associated with the outcome of secondary challenge by the heterologous E. maxima strain were all located in intronic, upstream, and downstream regions of the phenylalanine hydroxylase (PAH) gene (Supplementary Table S1). Only 16 protein-coding genes were located in the 0.5 Mb candidate region for enhanced response to heterologous secondary E. maxima challenge (Supplementary Table S2).
Backcross Study
Genome-wide association study results for resistance to E. maxima W primary infection in the backcross population revealed several significant genomic associations for each of the measured phenotypes. However, there was no overlap of the candidate genomic regions linked to parasite reproduction, intestinal pathology, or IL-10 induction (Figure 3 and Table 3). Specifically, a single marker on chromosome 3 had a suggestive association with PR (Figure 3A and Table 3). Four suggestive associations were identified with markers on chromosomes 1, 2, and 3 for intestinal pathology (i.e., lesion damage; Figure 3B and Table 3). A further four associations were found for IL-10 on chromosomes 1, 2, and 5 (Figure 3C and Table 3). None of the markers found on chromosome 2 for LS and IL-10 were in common, nor were they in LD. However, the candidate QTL region for IL-10 on chromosome 2 was in proximity to a marker, falling just below the suggestive threshold, found following primary infection with E. maxima H in the intercross population. The corresponding Q-Q plots for the GWAS are displayed in Figure 4. All significant markers identified in both studies exhibited significant (P < 0.01) additive genetic effects (Table 3).

FIGURE 2 | (A) Manhattan and (B) corresponding Q-Q plot for GWAS for oocyst output measured from the intercross chickens following heterologous secondary challenge. The -log10 P-value (on the y axis) indicating genome-wide significance is represented by the red line, while the blue line represents suggestive significance. The positions of the markers analyzed for the 28 main chicken autosomes (1-28) plus the sex chromosomes Z and W (29 and 30, respectively) and microchromosomes (31) are represented on the x axis. In (B), the expected chi-squared (χ2) values are plotted on the x axis, whereas the observed χ2 values are presented on the y axis, with the red line indicating the anticipated slope.
All of the significant markers identified for resistance to primary E. maxima W infection in the backcross population were located in intronic or intergenic regions (Supplementary Table S3). The candidate regions for response to primary E. maxima W infection contain a small number of genes: 36 protein-coding genes and four microRNAs (Supplementary Table S4).
Resequencing Analysis
In total, 3,230 variants were identified in the candidate regions associated with resistance to primary E. maxima infection. SNVs located in exonic regions accounted for less than 3% of the total, while the remaining SNVs (97%) were located in intronic, upstream, and downstream regions. Genes with SNVs that could potentially lead to non-functional transcripts were not detected. However, six genes contained missense SNVs that may affect the function of the encoded proteins. More specifically, the LONRF2, CHST10, PDCL3, and TBC1D8 genes on chromosome 1, FAM69C on chromosome 2, and IPCEF1 on chromosome 3 carried missense SNVs with moderate predicted effects. These genes also contained 3′/5′ UTR variants that may affect their expression. Details of the missense variants identified in the candidate regions for resistance to primary E. maxima infection are presented in Supplementary Table S5.
In total, 2,165 SNVs were detected in the candidate region on chromosome 1 for the response to heterologous secondary E. maxima W challenge. Most of the identified SNVs (95%) were located in intronic, upstream, and downstream regions; 5% were located in exonic regions, mostly in the 3′ and 5′ UTRs.

TABLE 3 | Measured traits - parasite replication per host genome (PR), lesion score (LS), and serum interleukin-10 (IL-10). Details provided: Affymetrix marker identifier; chromosome and position of markers in the Gal-gal5 assembly (Chr:mb); the additive genetic effect (GA) and significance values (P-value).
Nevertheless, three genes (PMCH, TBXAS1, THL3) containing missense variants with moderate effects, as well as 3′/5′ UTR variants, were detected. Details of the missense variants identified in the candidate region for heterologous secondary E. maxima W challenge are presented in Supplementary Table S6.
Pathway, Network, and Functional Enrichment Analyses
The analyses for resistance to primary E. maxima infection revealed pathway enrichment for immune response involvement, including IL-10, interleukin-6 (IL-6), nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB), and Toll-like receptor signaling (Figure 5). Using the list of candidate region genes, two networks were constructed, comprising molecular interactions related to inflammatory response and disease, cell death and survival, cellular compromise, and cell cycle (IPA scores = 25; Figures 6A,B). A single enriched cluster was found, related to immune response linked to interleukin-1 (IL-1), Toll/IL-1 response, and cytokine-cytokine receptor response (ES = 2.2, with the IL1R1, IL1RL1, IL2R, IL19R18, PTPRM, and COL14A genes involved). The pathway analyses for response to heterologous E. maxima W strain secondary challenge revealed enrichment for both immune (prostanoid biosynthesis, retinoic acid mediated apoptosis signaling, eicosanoid signaling) and metabolic pathways (Figure 7). Two gene networks were constructed, related to cell signaling, nucleic acid metabolism and small molecule biochemistry (IPA score = 20), and to cellular development, tissue development and function (IPA score = 45), respectively (Figures 8A,B). Accompanying functional annotation clustering analysis revealed the presence of two enriched clusters, related to cell-to-cell signaling (ES = 1.7) and metal-ion binding (ES = 1.3).
DISCUSSION
Coccidiosis remains one of the costliest diseases for the international poultry industry. Selectively breeding chickens for enhanced resistance to Eimeria challenge, and for improved breadth of vaccine response, could provide a tractable strategy to improve coccidiosis control. We conducted two studies using different crosses between the White Leghorn inbred lines 15I and C.B12. Our data confirm that line 15I birds are more susceptible to primary infection with E. maxima than line C.B12 by overall PR (Smith et al., 2002; Blake et al., 2006). While the two inbred lines exhibit similar resistance/susceptibility profiles following primary infection with either of the two antigenically distinct E. maxima strains, they show radically different levels of protection against heterologous secondary challenge by antigenically distinct strains of the same pathogen (Smith et al., 2002). We therefore investigated the genetic background of resistance to primary and heterologous secondary E. maxima W challenges.
The resistance of chickens to Eimeria infection has traditionally been quantified using measures such as oocyst output and LS, indicating resistance to PR and to parasite-induced pathology, respectively. For the former, the fewer oocysts excreted, the more resistant the chicken. Thus, oocyst shedding is considered an indicative trait and an accurate phenotype for assessing resistance to primary infection and subsequent parasite challenges, and this method was used in the intercross experiment. However, calculation of oocyst output by fecal flotation and microscopy is labor intensive. Thus, quantitative real-time PCR for parasite genome copies in intestinal tissues was used as an alternative measure of PR in the more recent backcross experiment (Blake et al., 2006). A third trait, serum IL-10, was also quantified for these latter chickens, providing a measure of the innate immune response to Eimeria infection (Rothwell et al., 2004; Boulton et al., 2018). IL-10 is produced after E. maxima and E. tenella primary infection of White Leghorn chickens (lines 15I and C.B12) and E. tenella primary infection of commercial broilers (Rothwell et al., 2004; Wu et al., 2016; Boulton et al., 2018). In all these cases, IL-10 was expressed at high levels in infected birds only, and correlated significantly with pathology (lesion scores). Here, GWAS from the backcross experiment identified markers associated with IL-10 that exhibit significant additive genetic variance. These findings, in conjunction with indications that IL-10 is correlated significantly with gross pathology in a commercial population following primary infection with E. tenella (Boulton et al., 2018), support the use of IL-10 as an accessible early-life biomarker in breeding programs aiming to improve resistance to Eimeria challenge or its pathological outcomes.
Although the significance of E. maxima in field coccidiosis has been recognized for many years, there have been few genetic studies investigating host resistance to E. maxima primary infection and challenge. A recent study that investigated the genetic background of resistance to high-level E. maxima infection using the same HD genotyping array but measuring three different phenotypes (body weight gain, plasma coloration, and β2-globulin in blood plasma) identified several QTL on chromosomes 1, 2, 3, 5, and 10 in commercial Cobb500 broilers (Hamzic et al., 2015). Similar to our findings, Hamzic et al. (2015) found no QTL overlap among their different phenotypes. Interestingly, the QTL identified by Hamzic et al. (2015) on chromosome 1 for β2-globulin in blood plasma is near (a 2 Mb difference) the QTL found in our study linked to resistance to heterologous secondary E. maxima W challenge. Similar enriched biological pathways, related to innate immune responses and metabolic processes, were also detected in the two studies with this parasite species.
In other comparable work, Zhu et al. (2003) performed a linkage analysis study investigating chicken resistance, in terms of oocyst output, following controlled E. maxima infection using an F2 intercross between two broiler lines with different susceptibility to primary E. maxima infection. Using 119 microsatellite markers, one locus associated with E. maxima resistance was identified on chromosome 1 (Zhu et al., 2003). Expanding this work, Kim et al. (2006) used nine microsatellite markers located on chromosome 1 to refine this region.
According to their results, the peak of QTL was located a considerable genetic distance (i.e., 254 cM) away from the chromosome 1 QTL identified here and in the Hamzic et al. (2015) study. This could be attributed to the use of different chicken lines, E. maxima strains, analysis methods, and/or genotyping tools. It is worth mentioning that the power to detect QTL as well as the resolution of their location using a few microsatellites is limited compared to HD genotyping platforms.
Comparison of the re-sequencing data of the two parental chicken lines identified a small number of genes that differ regarding the presence of exonic variants with a putative functional effect on the encoded proteins. Two genes of interest with missense variants located in the candidate regions for resistance to E. maxima primary infection encode the Phosducin Like 3 (PDCL3) and TBC1 Domain Family Member 8 (TBC1D8) proteins. These immune-related genes were included in the two networks, related to inflammatory response and to cell death and survival, constructed by IPA. PDCL3 acts as a chaperone for the angiogenic vascular endothelial growth factor receptor, controlling its abundance and inhibiting its ubiquitination and degradation, and also modulates activation of caspases during apoptosis (Wilkinson et al., 2004; Srinivasan et al., 2013). TBC1D8 is involved in the regulation of cell proliferation and calcium ion transport, and also has GTPase activator activity (Ishibashi et al., 2009).
The genes encoding Thromboxane A Synthase 1 (TBXAS1) and Pro-Melanin Concentrating Hormone (PMCH) are located in the candidate region and are of interest in resistance to secondary challenge by heterologous E. maxima W. TBXAS1 encodes a member of the cytochrome P450 superfamily of enzymes involved in both immune response and metabolism; it plays a role in drug metabolism, platelet activation and metabolism, and the synthesis of cholesterol, steroids, and other lipids (Yokoyama et al., 1991; Miyata et al., 1994). The proinflammatory actions of thromboxane receptors have been demonstrated to enhance cellular immune responses in a mouse model (Thomas et al., 2003). PMCH encodes a preproprotein that is proteolytically processed to generate multiple protein products, including melanin-concentrating hormone (MCH), which stimulates hunger and may additionally regulate energy homeostasis, reproductive function, and sleep (Viale et al., 1997; Chagnon et al., 2007). In a further mouse model, MCH has also been reported as a mediator of intestinal inflammation (Kokkotou et al., 2008). Although the genes mentioned above are good functional candidates for resistance to primary infection and heterologous challenge with E. maxima, further studies are needed to confirm the present results and to identify the actual causative genes and mutations.
The immune interactions between an intracellular pathogen and a host are complex and vary as a consequence of the survival mechanisms that have evolved in both (Blake et al., 2011; Blake and Tomley, 2014). It has been suggested that host control of challenge with Eimeria, an obligate intracellular pathogen, requires a strong inflammatory, mostly cell-mediated response (Dalloul and Lillehoj, 2006). Also, host innate immune responses have been detected during initial pathogen exposure in several studies (Kim et al., 2008; Pinard-van der Laan et al., 2009; Wu et al., 2016; Boulton et al., 2018). According to our findings, several gene networks and pathways relating to innate, humoral, and cell-mediated immune responses were highlighted from the gene products located in the candidate regions for resistance to primary Eimeria infection. Among the canonical pathways, IL-10 signaling was the most significant, with relevance as a regulator of cytokines such as interferon-gamma (IFN-γ). These findings agree with previous studies of Eimeria resistance that have highlighted IFN-γ and tumor necrosis factor (TNF) nodes as crucial (Pinard-Van Der Laan et al., 1998; Smith and Hayday, 2000a,b; Bacciu et al., 2014), since IL-10 downregulates IFN-γ production (Schaefer et al., 2009).
CONCLUSION
We identified genomic regions, putative candidate genes, canonical pathways and networks involved in the underlying molecular mechanisms of chicken resistance to E. maxima primary infection and to secondary heterologous E. maxima strain challenge. More emphasis should be placed on the relevant mechanisms for disease resistance, the response to secondary heterologous strain challenge and the role of IL-10 induction in immune responses to intestinal challenge in the future selective breeding of chickens.
AVAILABILITY OF SUPPORTING DATA
The resequencing data used in this study are available in NCBI dbSNP at the following web page: http://www.ncbi.nlm.nih.gov/SNP/snp_viewBatch.cgi?sbid=1062063.
AUTHOR CONTRIBUTIONS
AS, PK, SB, FT, and DB devised the overall strategy and obtained funding. PK, SB, FT, and DB conceived the backcross experiments. PMH and KB devised the backcross breeding. MN managed the backcross trials and performed qPCR and DNA extraction assisted by KH and KB. Backcross phenotype collection was carried out by MN, KH and KB, while DB scored lesions. ZW performed IL-10 assays assisted by KB. KB prepared backcross DNA for genotyping and carried out all backcross analyses with input from AP, VR, and OM. AS designed the intercross trials with input from NB and these were carried out by PH and AA. AP performed an initial analysis of the intercross data with input from OM and KB. Pathway and resequencing analyses were performed by AP and KB. The manuscript was drafted by KB and AP with input from all other authors except PMH, SB, NB, and PK. AS, DB, FT, DH, and AP assisted in the interpretation of results.
FUNDING
The backcross work was funded by the BBSRC through the Animal Research Club (ARC) program under grants BB/L004046 and BB/L004003, while DEFRA OD0534 and BBSRC BB/E01089X/1 funded the intercross study. | 8,297.8 | 2018-11-26T00:00:00.000 | [ "Biology" ] |
ESG criteria and the credit risk of corporate bond portfolios
Demand for sustainable fixed-income investment solutions is surging, but there is hardly any research on the impact of sustainability on the risk characteristics of fixed-income portfolios. This study examines the impact of sustainability on the credit risk exposure of corporate bond portfolios between 2013 and 2020 by analyzing the returns of sustainable and non-sustainable portfolios using two different asset pricing models and environmental, social, and governance (ESG) ratings from different providers. Controlling for a set of portfolio characteristics, our results show that sustainable portfolios are significantly less exposed to credit risk than their non-sustainable peer portfolios. This finding implies that considering ESG criteria in portfolio management is a suitable means to systematically manage credit risk. As the first study to investigate the relationship between sustainability and credit risk at the portfolio level, it contributes to the understanding of the effects of ESG criteria in portfolio management and provides academics and investment professionals with valuable insights.
Introduction
Investment solutions considering environmental, social, and governance (ESG) criteria, so-called sustainable investments, have experienced substantial demand- and supply-side growth in recent years. According to the Global Sustainable Investment Alliance (GSIA), the volume of sustainable investments in Europe, the USA, Canada, Australia/New Zealand, and Japan increased from 22.84 trillion U.S. dollars at the beginning of 2016 to 35.30 trillion U.S. dollars at the beginning of 2020, representing an increase of 65% (GSIA 2020). According to these numbers, sustainable investments account for more than one third of global assets under management (Boston Consulting Group 2020; GSIA 2020).
The growth of sustainable investments is accompanied by a large number of studies that examine the effect of ESG criteria on the risk-return profile of investments (Friede et al. 2015; Wallis and Klein 2015). The vast majority of those studies focus on listed equity, although fixed-income instruments account for almost 40% of all sustainable investments (GSIA 2018). Furthermore, studies analyzing the impact of ESG criteria on the risk-return profile of fixed-income instruments focus on (i) (abnormal) returns of sustainable mutual bond funds (Derwall and Koedijk 2009; Henke 2016), (ii) (abnormal) returns of synthetic bond portfolios (Pereira et al. 2019; Polbennikov et al. 2016), or (iii) credit risk at the company level (Graham and Maher 2006; Klock et al. 2005; Oikonomou et al. 2014; Stellner et al. 2015). However, to the best of our knowledge, there is no research which systematically investigates the effects of ESG criteria on corporate bond portfolios' credit risk.
This study aspires to fill this research gap by analyzing whether ESG criteria are a means to manage the credit risk of bond portfolios. To do so, we compare the credit risk exposures of sustainable and non-sustainable bond portfolios using the two-factor model by Fama and French (1993) and a four-factor model by Elton et al. (1995). Our analysis covers approximately 5000 corporate bonds and the 2013-2020 period. We derive the portfolio constituents for the sustainable (non-sustainable) portfolio based on an ESG-best-in-class (worst-in-class) approach with varying ESG score percentile thresholds. Thus, the sustainable portfolio consists of bonds with the highest ESG scores, while the non-sustainable portfolio consists of bonds with the lowest ESG scores. We control for industry and regional allocation, option-adjusted duration, and credit rating to minimize the impact of other bond characteristics on the portfolios' credit risk exposure. As proposed by Chatterji et al. (2016), we conduct our analysis using ESG ratings from two different ESG rating providers to control for differences in ESG ratings across providers.
We find a significantly adverse relation between the bond portfolios' sustainability and their credit risk. Furthermore, a clear relationship between the level of reduction in credit risk and the severity of the ESG-best-in-class (worst-in-class) approach indicates that ESG criteria can be used to systematically manage the credit risk of a bond portfolio. These findings are robust to the use of different asset pricing models and ESG ratings.
The contributions of this study are threefold. First, it is the first study which systematically investigates the effect of ESG criteria on the credit risk of corporate bond portfolios. Hence, this study adds a new perspective to the risk-return debate inherent to sustainable investments. It clearly shows that the consideration of ESG criteria in the portfolio allocation of bonds can significantly reduce a portfolio's systematic credit risk. Second, it also provides important insights into the usability of ESG ratings. While a large body of literature indicates that (some) ESG ratings are subject to various biases, such as the sustainability rating agencies' differing understandings of sustainability (Berg et al. 2022; Chatterji et al. 2016; Dimson et al. 2020; Dorfleitner et al. 2015), a size bias (Drempetic et al. 2020), the rewriting of rating history (Berg et al. 2020), mean reversion (Gidwani 2020), and a quantity bias (Chen et al. 2021), the results of this study suggest that ESG ratings are still a suitable means for managing bond portfolios' credit risk. Third, the findings of this study support investment professionals in finding an efficient solution to effectively manage credit risks in their corporate bond portfolios by integrating ESG criteria into their investment process.
The remainder of this study is structured as follows: The next section briefly reviews the related literature. It is followed by a description of the data and methodology of the empirical analysis. We then present the results and highlight the main findings of the analysis, followed by a confirmation of the results using different robustness checks. Finally, this paper concludes with a summary of the key findings and gives an outlook on further research needs.
Literature review
Research that assumes a link between sustainability and credit risk usually draws on one of the following two theories. First, the risk mitigation view (Goss and Roberts 2011) states that companies with a superior sustainability performance are less risky, since they are, e.g., less prone to reputational losses induced by negative incidents. Second, the overinvestment view (Goss and Roberts 2011) regards investments in sustainability as a waste of scarce resources, which increases fixed costs and earnings volatility and thus companies' credit risk (e.g., Frooman, Zietsma and McKnight 2008). Literature that deals with the effect of ESG criteria on the risk-return profile of fixed-income instruments primarily focuses on (i) (abnormal) returns of sustainable mutual bond funds, (ii) (abnormal) returns of synthetic bond portfolios, or (iii) credit risk at the company level. Thus, there is no literature that systematically analyzes the effect of ESG criteria on the risk characteristics of bond portfolios.
Concerning returns of sustainable mutual bond funds, Derwall and Koedijk (2009) measure the performance of ESG fixed-income funds in the U.S. for the 1987 to 2003 period by using multi-index performance evaluation models. They find that the performance of average ESG bond funds does not differ compared to conventional bond funds. Henke (2016) uses a five-factor model to explain returns of bond funds, showing that ESG bond funds in the U.S. and the Eurozone outperform conventional funds during the 2001 to 2014 period. This finding especially occurs during recessions or bear market periods. He attributes the outperformance to a systematic effect of social screening on financial performance caused by the hypothesized lower risk of high-ESG companies in combination with ESG bond funds' exclusion of irresponsible corporate bond issuers. Leite and Cortez (2018) investigate the performance of ESG bond funds domiciled in France and Germany from 2002 to 2014 using conditional multifactor models that allow for both time-varying risk and performance. They find that ESG bond funds significantly outperform their conventional counterparts, which might be attributable to the government bonds and not to the corporate bonds in the portfolios. Madhavan and Sobczyk (2020) find a strong negative relation between a fund's total return and its holding-based ESG score for an active U.S. bond fund sample in the 2015 to 2020 period. The authors explain this observation by the fact that funds with higher ESG scores have an exposure to high-quality bonds, which are less volatile.
Considering the return patterns of synthetic bond portfolios, Polbennikov et al. (2016) show that portfolios consisting of high-ESG bonds have a slightly higher performance than their low-ESG counterparts after controlling for systematic risk factors such as duration, spread, and DTS by sector. Moreover, high-ESG bond portfolios have lower credit spreads on average. The first finding indicates the existence of a sustainability premium, which can explain a certain fraction of bond portfolio returns beyond the used systematic risk factors. Their second finding adds to the current academic discussion regarding the impact of sustainability on credit spreads. However, even though Polbennikov et al. (2016) account for systematic risk factors in their return attribution, they do not analyze possible differences in the exposure to systematic factors. Pereira et al. (2019) investigate the performance of high- and low-ESG bond portfolios of 189 Eurozone companies between 2003 and 2016 using a conditional multifactor model. They conclude that high-ESG bond portfolios outperformed at an early stage, with the outperformance disappearing over time. Moreover, their regression results indicate that high-ESG bond portfolios are significantly more exposed to credit risk than low-ESG bond portfolios. Pereira et al. (2019) try to explain their finding by arguing that since low credit rated bonds present high yields, issuers of speculative grade bonds can benefit the most in absolute terms from reductions in the cost of debt that may result from considering ESG criteria in business practices. Building active investment grade, high yield, and emerging market bond portfolios, Bahra and Thukral (2020) conclude that ESG scores can be used to enhance portfolio outcomes via lower drawdowns, reduced portfolio volatility, and, in some cases, even marginally increased risk-adjusted returns.
Finally, there is a large body of literature investigating the potential effects of the integration of ESG criteria in business activities on credit ratings, credit risk premiums, and the costs of debt of companies. The results are ambiguous. Some studies suggest better credit ratings, i.e., lower risks and costs of debt, for high-ESG companies (Bauer and Hann 2010; Bhojraj and Sengupta 2003; Chiesa et al. 2021; Oikonomou et al. 2014; Stellner et al. 2015). In contrast, other investigations point in the opposite direction (Izzo and Magnanelli 2012; Menz 2010).
Methodology
We investigate the impact of ESG criteria on the credit risk exposure of corporate bond portfolios using the following two-factor approach introduced by Fama and French (1993):

$$r_{P,t} = \alpha + \beta_1 \cdot TERM_t + \beta_2 \cdot DEF_t + \varepsilon_{P,t} \quad (1)$$

This model explains the excess returns ($r_{P,t}$) of, in our case, a bond portfolio $P$ with a term ($TERM_t$) and a credit risk ($DEF_t$) factor in month $t$. $TERM_t$ represents the return resulting from price changes and the carry of treasury bonds, hence capturing interest-rate risk. The factor return is calculated using the monthly return of a long-term U.S. treasury index in excess of the one-month U.S. treasury bill rate. The monthly U.S. treasury bill rate proxies for a risk-free investment opportunity. $DEF_t$ explains the part of the return that is attributable to credit spreads, i.e., $DEF_t$ captures credit risk. The monthly factor return is calculated by subtracting the return of the long-term U.S. treasury index from that of a long-term U.S. corporate bond index. $\varepsilon_{P,t}$ denotes the error term. Despite being a rather simple model, Fama and French (1993) demonstrate the strong explanatory power of these two factors for the variation of investment-grade corporate bond portfolio returns. Hence, we use this straightforward approach due to its simplicity to analyze the differences in the systematic credit risk of sustainable and non-sustainable corporate bond portfolios by regressing the time series of the different portfolio returns as well as the long-short portfolio returns for each level of sustainability. We test the robustness of our results by using (i) ESG scores from two different data providers and (ii) a more sophisticated four-factor model to explain the portfolio returns.
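For concreteness, the regression in Eq. (1) can be estimated by ordinary least squares with Newey-West (HAC) standard errors, as in the tables below. The following Python sketch uses simulated factor and return series; the column names, coefficient values, and lag length are illustrative assumptions, not the study's actual data or settings.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated monthly factor returns for 2013-2020 (96 months);
# values are placeholders for illustration only.
rng = np.random.default_rng(0)
n = 96
factors = pd.DataFrame({
    "TERM": rng.normal(0.002, 0.010, n),  # long-term treasuries minus T-bill
    "DEF": rng.normal(0.001, 0.008, n),   # corporate index minus treasury index
})
# Hypothetical portfolio excess returns loading on both factors
excess_ret = 0.0005 + 0.9 * factors["TERM"] + 0.3 * factors["DEF"] \
    + rng.normal(0.0, 0.003, n)

# OLS with Newey-West (HAC) standard errors, mirroring Eq. (1)
X = sm.add_constant(factors)
fit = sm.OLS(excess_ret, X).fit(cov_type="HAC", cov_kwds={"maxlags": 6})
print(fit.summary())
```

The same regression applied to the return series of a Low-High difference portfolio yields the coefficient differences discussed in the results.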
Portfolio construction
We derive our initial bond universe using the yearly constituents of the "Bloomberg Barclays US Agg Corporate Bond Index" during the analyzed period from 2013 to 2020, covering 12,592 corporate bonds in total. The index comprises corporate bonds denominated in U.S. dollars. Bonds included in the index are restricted to a time to maturity of at least one year and an investment-grade credit rating. We exclude bonds without an ESG rating. The index provides an excellent foundation due to its broad coverage and its representativeness of the U.S. dollar bond market. We consider all bonds with a sustainability rating from MSCI ESG Research.
We group the bonds according to their industry sector, region of risk, credit rating, and duration using Bloomberg data. The grouping procedure enables us to minimize the influence of differences in exposure to systematic risk factors. Table 1 describes the variables and the possible characteristics used in the grouping procedure. We first rank the bonds within each of the resulting 1050 groups with respect to their MSCI ESG score. Bonds with ESG scores in the top 12.5% (25% and 50%, respectively) within each group are assigned to the High ESG portfolio, and conversely, bonds with scores in the bottom 12.5% (25% and 50%, respectively) are assigned to the Low ESG portfolio. This procedure yields three pairs of portfolios, with each pair consisting of one sustainable and one non-sustainable portfolio. We account for changes in a bond's option-adjusted duration, companies' credit ratings, and their sustainability performance (proxied by the ESG score) by revising the constituents of each portfolio in December of each year. All bonds within a portfolio are assigned an equal weighting.
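A minimal pandas sketch of this within-group best-in-class (worst-in-class) assignment might look as follows. All column names, the group definitions, and the tie handling are hypothetical placeholders; in the study, equal weighting and annual rebalancing in December are applied on top of such a labeling.

```python
import pandas as pd

def assign_esg_portfolios(bonds: pd.DataFrame, tail: float) -> pd.DataFrame:
    """Label bonds as High/Low ESG within matched groups.

    `bonds` is assumed to have the columns sector, region, rating_bucket,
    duration_bucket, and esg_score; `tail` is the fraction assigned to
    each portfolio (0.125, 0.25, or 0.50).
    """
    group_cols = ["sector", "region", "rating_bucket", "duration_bucket"]

    def label(group: pd.DataFrame) -> pd.Series:
        hi = group["esg_score"].quantile(1.0 - tail)  # e.g. 87.5% percentile
        lo = group["esg_score"].quantile(tail)        # e.g. 12.5% percentile
        out = pd.Series("none", index=group.index)
        out[group["esg_score"] >= hi] = "High ESG"
        out[group["esg_score"] <= lo] = "Low ESG"
        return out

    bonds = bonds.copy()
    bonds["portfolio"] = bonds.groupby(group_cols, group_keys=False).apply(label)
    return bonds
```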
Systematically investigating the impact of ESG criteria on credit risk using a portfolio approach rather than focusing on individual bonds comes with several advantages. First, the portfolio approach reduces bond-specific idiosyncratic factors. Thus, it allows for more precise estimates of the exposure to the credit risk factor and, ultimately, of the impact of ESG criteria on credit risk. Second, it enables us to differentiate between different levels of sustainability. And third, the findings are better tailored to investors.
Descriptive statistics
Table 2 provides statistics on the High (Low) ESG portfolios' yearly credit ratings and their option-adjusted duration based on the 12.5% ESG score percentile threshold, for which the likelihood of significant differences in these characteristics is the highest. However, we expect differences in both characteristics to disappear as a result of our grouping procedure, which allows us to attribute differences in the High and Low ESG portfolios' exposure directly to the differences in their ESG criteria. The descriptive analysis of the two characteristics at the portfolio level supports our expectation. We find consistently small, if any, differences between the High and Low ESG portfolios' credit ratings (Panel A) and option-adjusted duration (Panel B) with respect to their yearly median (column 3) and mean (column 4). Using a nonparametric Wilcoxon test (column 7) and a two-sided t test (column 8), we can confirm that those small differences are not statistically significant. We conclude that High and Low ESG portfolios do not differ with respect to their average credit rating (Panel A) or their average option-adjusted duration (Panel B) in any of the given years. Thus, the grouping process is successful, and differences in the portfolios' credit risk can be attributed to differences in the degree of sustainability.
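In outline, the two tests can be run as follows; the yearly mean values below are simulated placeholders, not the actual portfolio statistics.

```python
import numpy as np
from scipy import stats

# Hypothetical yearly mean credit ratings (numeric scale 2-11) for the
# High and Low ESG portfolios over 2013-2020
high_esg = np.array([8.1, 8.0, 8.2, 8.1, 8.3, 8.2, 8.1, 8.2])
low_esg = np.array([8.0, 8.1, 8.1, 8.2, 8.2, 8.1, 8.2, 8.1])

w_stat, w_p = stats.ranksums(high_esg, low_esg)   # Wilcoxon rank-sum test
t_stat, t_p = stats.ttest_ind(high_esg, low_esg)  # two-sided t test
print(f"Wilcoxon p = {w_p:.3f}, t test p = {t_p:.3f}")
```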
Results
Table 3 presents the results of the Fama and French (1993) two-factor model regressions for the High and Low ESG corporate bond portfolios. In addition to the regression results for the return time series of the respective portfolios, return and risk factor differences between both portfolios are analyzed by means of a difference portfolio, going long in the Low and short in the High ESG portfolio ("Low-High"). We observe that the explanatory power of the Fama and French (1993) two-factor model is quite high, accounting for 93.4% to 97.4% of the Low and High ESG portfolios' return variation. Unsurprisingly, we find significantly positive loadings on TERM, indicating a statistically significant interest-rate risk exposure in both portfolio types. Since the coefficient of the TERM factor does not significantly differ from zero in any of the Low-High difference portfolios, there is, however, no evidence for differences in interest-rate risk exposure between the two portfolios. This finding again reflects a successful grouping of bonds before the regression. Furthermore, all Low and High ESG portfolios load significantly positively on the credit risk (DEF) factor. We also find a significantly positive coefficient for DEF in each of the three Low-High ESG portfolios, implying that Low ESG portfolios have a significantly higher credit risk than High ESG portfolios. Interestingly, the magnitude of the DEF coefficients of the difference portfolio regressions is inversely related to the ESG score percentile threshold, rising from 0.049 with a 50% ESG score percentile threshold to 0.164 with a 12.5% ESG score percentile threshold. Thus, the difference in the exposure to credit risk between Low and High ESG portfolios rises with the difference in the average ESG scores, suggesting that a portfolio's overall credit risk significantly relates to its ESG performance. The more sustainable a corporate bond portfolio is, the lower its credit risk on average. This finding suggests that corporate bond portfolios' credit risk can be purposefully managed by considering ESG criteria in the investment process. We find evidence for a systematically lower exposure toward credit risk of sustainable corporate bond portfolios created via the ESG-best-in-class approach compared to their non-sustainable corporate bond peer portfolios. Hence, we can confirm that ESG score differences at the portfolio level are associated with a lower portfolio credit risk on average. Furthermore, the constants of our Low-High difference portfolios do not support a significant risk-adjusted outperformance of non-sustainable corporate bond portfolios compared to sustainable corporate bond portfolios. We can thus reject a performance penalty from the ESG-best-in-class approach. Based on these findings, we conclude that an ESG-best-in-class strategy can be used as an instrument to control the systematic credit risk exposure of a portfolio while the resulting ESG shift still allows the sustainable portfolio to be adequately diversified.

Table 1 Description of the Grouping Systematics for the Matching Procedure
This exhibit describes the variables that are used for the grouping procedure. Column 1 displays the name of each characteristic, column 2 contains a short description, and column 3 lists the possible characteristics for each variable. The grouping procedure aims to minimize differences between the sustainable and the non-sustainable bond portfolio that are related to credit risk.
Factor | Description | Characteristics
Industry sector | According to the Bloomberg Industry Sector Classification System (BICS) | Basic Materials, Communication, Consumer cyclical, Consumer non-cyclical, Energy, Financial, Industrial, Technology, Utility
Region of risk | The region of the country to which the company has the biggest business risk exposure (country of risk) | Africa/Middle East, Asia Pacific, North America, South and Central America, Western Europe
Credit rating | The worst credit rating for the bond from S&P, Moody's, and Fitch | Better than AA−, from A+ to A−, worse than BBB+
Duration | The option-adjusted duration (OAD) of a bond | Below 3%, 3-5%, 5-7%, 7-10%, 10-15%, 15-20%, above 20%

Table 2 Descriptive Portfolio Statistics
This exhibit provides the minimum (Min, column 1), the 25% percentile (column 2), the median (column 3), the mean (column 4), the 75% percentile (column 5), and the maximum (Max, column 6) value of the credit ratings (Panel A) and the option-adjusted duration (Panel B) for the portfolios built on the MSCI 12.5% ESG score percentile threshold. Credit ratings are translated into numeric values ranging from 2 (lowest credit rating) to 11 (highest credit rating). To test for potential differences between the sustainable (High ESG) and the non-sustainable (Low ESG) portfolio, we apply a Wilcoxon rank-sum test and a two-sided t test on the yearly mean values. The p-values of both tests are presented in columns 7 and 8.
Robustness checks
Our initial analysis has already revealed robust results for varying ESG score percentile thresholds. In addition to the application of those different thresholds, we run two further robustness checks integrating the findings of previous academic studies. Recent research, e.g., revealed inconsistencies between ESG ratings of different rating providers (Berg et al. 2022; Chatterji et al. 2016; Dorfleitner et al. 2015; Dumrose et al. 2022). Chatterji et al. (2016), therefore, propose using scores from different ESG rating providers to ensure the robustness of studies' results. Following this proposal, we replace MSCI ESG scores with Refinitiv ESG scores and re-run the Fama and French two-factor model under consideration of the aforementioned ESG score percentile thresholds. Table 4 presents the results using Refinitiv ESG scores. This analysis confirms the results of our initial analysis, revealing that the Low-High ESG portfolios' exposure to the credit risk factor increases as the ESG score percentile threshold becomes stricter. While the differences in credit risk are statistically significant for all Low-High ESG portfolios when using MSCI ESG ratings, we only observe a statistically significant difference for the 12.5% ESG score percentile threshold when using Refinitiv ESG ratings. These results nevertheless underpin our conclusion that the difference in credit risk rises with the difference in the average ESG scores of the corporate bond portfolios.
In our second test, we replace the Fama and French (1993) two-factor model with a more sophisticated four-factor model introduced by Elton et al. (1995). This model provides two major enhancements with respect to our findings. First, Elton et al. (1995) take a slightly different approach in measuring the credit risk factor. The authors use the difference in returns between a U.S. high-yield corporate bond index and a U.S. intermediate treasury bond index as a gauge for the systematic credit risk factor. And second, the four-factor model by Elton et al. (1995) covers not only systematic credit risk (DEF2) but also incorporates a risk factor related to the risk in the overall bond market (BOND), an optionality premium factor (OPTION), and a risk factor relating to systematic equity risk (EQUITY). Therefore, we apply the following regression model:

$$r_{P,t} = \alpha + \beta_1 \cdot EQUITY_t + \beta_2 \cdot BOND_t + \beta_3 \cdot DEF2_t + \beta_4 \cdot OPTION_t + \varepsilon_{P,t} \quad (2)$$

Table 5 presents the results of the four-factor model. We do not find a significant exposure toward the EQUITY factor for either the Low or the High ESG portfolio. However, both portfolios are symmetrically exposed toward the BOND, DEF2, and OPTION factors. In line with our expectations, the significant factor exposure does not persist for the BOND and OPTION factors in the Low-High ESG portfolio. However, the results of the four-factor model support our previous findings regarding significant differences in the Low-High ESG corporate bond portfolios' exposure toward the credit risk factor. The regression coefficient is positive and statistically different from 0. A one percentage point increase in the return of the credit risk factor is associated with an increase in the Low-High ESG portfolio return of 0.13 percentage points on average. Thus, our finding of significantly different exposure toward the credit factor relating to large differences in portfolios' ESG scores can be confirmed using the more sophisticated Elton et al. (1995) model.
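As a sketch, the four-factor regression (2) can be estimated in the same way as the two-factor model. The simulated factor series and coefficients below are placeholders chosen only to mirror the sign pattern described in the text.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 96
factors4 = pd.DataFrame({
    "EQUITY": rng.normal(0.005, 0.030, n),
    "BOND": rng.normal(0.002, 0.010, n),
    "DEF2": rng.normal(0.001, 0.012, n),  # high yield minus intermediate treasuries
    "OPTION": rng.normal(0.000, 0.005, n),
})
# Hypothetical Low-High difference portfolio returns, loading only on DEF2
diff_ret = 0.0003 + 0.13 * factors4["DEF2"] + rng.normal(0.0, 0.002, n)

fit4 = sm.OLS(diff_ret, sm.add_constant(factors4)).fit(
    cov_type="HAC", cov_kwds={"maxlags": 6})
print(fit4.params.round(3))
```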
Conclusion
The aim of this study is to give academics and investment professionals important insights on the impact of sustainability on the systematic credit risk exposure of corporate bond portfolios. Controlling for several portfolio characteristics, our findings highlight that the implementation of an ESG-best-in-class strategy significantly affects the credit risk exposure without any performance or diversification penalty. There is a clear adverse relation between a corporate bond portfolio's sustainability and its credit risk: the higher the sustainability, the lower the credit risk. Hence, our results are consistent with the risk mitigation view (Goss and Roberts 2011), which states that companies with a superior sustainability performance are less risky. The findings of this study are robust to the usage of ESG ratings from different providers and different asset pricing models.
Our study further has implications for investment professionals. First, the consideration of ESG criteria in portfolio management is a suitable tool to systematically manage credit risk. Second, investment professionals have to account for the resulting bias in their portfolios to control their active risks. For instance, asset managers tracking a sustainable portfolio against a conventional benchmark need to overweight companies with lower credit ratings to compensate for the risk-mitigating effect of higher sustainability and thereby reduce the systematic credit risk tracking error. This is the first academic study that analyzes the effect of incorporating an ESG-best-in-class approach on corporate bonds, which noticeably contributes to the current academic discussion. Further research could investigate the stated relationships for different fixed-income segments, such as high yield or emerging markets, because client demand in these segments is rising and the impact of sustainability could be more pronounced due to higher information asymmetries and weaker regulation.
Table 3
Fama and French (1993) Two-Factor Model Using MSCI ESG Data. This table presents the regression coefficients of monthly OLS regressions using the Fama and French (1993) two-factor model. Standard errors are adjusted per Newey-West and presented in parentheses. Columns 1 to 3 provide the 50% ESG score percentile threshold, columns 4 to 6 the 25% ESG score percentile threshold, and columns 7 to 9 the 12.5% ESG score percentile threshold. Columns 1, 4, and 7 present the results for the non-sustainable (Low ESG) portfolios, columns 2, 5, and 8 the results for the sustainable (High ESG) portfolios, and columns 3, 6, and 9 the results of the Low-High difference portfolios. The sample period is from 2013 to 2020. The data are derived from Bloomberg and MSCI.
Table 4
Fama and French (1993) Two-Factor Model Using Refinitiv ESG Data. This table presents the regression coefficients of monthly OLS regressions using the Fama and French (1993) two-factor model. Standard errors are adjusted per Newey-West and presented in parentheses. Columns 1 to 3 provide the 50% ESG score percentile threshold, columns 4 to 6 the 25% ESG score percentile threshold, and columns 7 to 9 the 12.5% ESG score percentile threshold. Columns 1, 4, and 7 present the results for the non-sustainable (Low ESG) portfolios, columns 2, 5, and 8 the results for the sustainable (High ESG) portfolios, and columns 3, 6, and 9 the results of a Low-High difference portfolio with respect to each ESG score percentile threshold. The sample period is from 2013 to 2020. The data are derived from Bloomberg and Refinitiv.
Table 5
Elton et al. (1995) Model at the 12.5% Cut-Off Level. This table presents the coefficients of monthly OLS regressions using the Elton et al. (1995) four-factor model. Standard errors are adjusted per Newey-West and presented in parentheses. Column 1 presents the coefficients for the non-sustainable (Low ESG) portfolio, column 2 the coefficients for the sustainable (High ESG) portfolio, and column 3 the coefficients of a Low-High difference portfolio using a 12.5% ESG score percentile threshold. The sample period is from 2013 to 2020. The data are derived from Bloomberg and MSCI. *, **, and *** indicate significance at the 10%, 5%, and 1% level, respectively.
"Business",
"Economics",
"Environmental Science"
] |
Exploring Computational Techniques in Preprocessing Neonatal Physiological Signals for Detecting Adverse Outcomes: Scoping Review
Background: Computational signal preprocessing is a prerequisite for developing data-driven predictive models for clinical decision support. Thus, identifying the best practices that adhere to clinical principles is critical to ensure transparency and reproducibility to drive clinical adoption. It further fosters reproducible, ethical, and reliable conduct of studies. This procedure is also crucial for setting up a software quality management system to ensure regulatory compliance in developing software as a medical device aimed at early preclinical detection of clinical deterioration. Objective: This scoping review focuses on the neonatal intensive care unit setting and summarizes the state-of-the-art computational methods used for preprocessing neonatal clinical physiological signals; these signals are used for the development of machine learning models to predict the risk of adverse outcomes. Methods: Five databases (PubMed, Web of Science, Scopus, IEEE, and ACM Digital Library) were searched using a combination of keywords and MeSH (Medical Subject Headings) terms. A total of 3585 papers from 2013 to January 2023 were identified based on the defined search terms and inclusion criteria. After removing duplicates, 2994 (83.51%) papers were screened by title and abstract, and 81 (2.26%) were selected for full-text review. Of these, 52 (64%) were eligible for inclusion in the detailed analysis. Results: Of the 52 articles reviewed, 24 (46%) studies focused on diagnostic models, while the remainder (n=28, 54%) focused on prognostic models.
Background
Premature infants are those born at <37 weeks' gestational age, ranging from extremely preterm (23 weeks' gestation) to late preterm (37 weeks' gestation); infants with very low birth weight are defined as weighing <1500 g. These extremely premature infants have a higher risk of death, and surviving infants are highly prone to physical, cognitive, and emotional impairment [1]. The patients usually have a long length of stay, ranging from <10 to >120 days [2], in the neonatal intensive care unit (NICU), where high-fidelity physiological changes are monitored to observe their health status and signs of deterioration. During this long length of stay, a large amount of data from infants is generated and not typically electronically aggregated for permanent storage [3]. With the advent of electronic health records, relevant patient information is easily available for advanced data analytics that can be used to improve health outcomes. The records contain demographics, etiology, pathology, medication, and physiology information. Physiological changes are regularly monitored in preterm infants, notably electrocardiogram (ECG), oxygen saturation (SpO2), heart rate (HR), respiratory rate, arterial blood pressure, electroencephalography (EEG), and temperature. Some advanced centers around the world have started linking the information derived from the electronic health records data with the continuously monitored physiological information for permanent storage, more frequently in lower resolution, which facilitates various data analytics [4][5][6]. Compared with intermittent assessment and review, continuous capture and analysis of the physiological data from standard bedside monitors allow for a better understanding of trends and have been shown to improve outcomes of infants in the NICU [5].
Clinical decision support systems (CDSSs) can integrate clinical and physiological information to provide automated support in patient care planning to facilitate the diagnostic process and therapy planning, generate critical alerts and reminders, and predict the risk of patient deterioration. CDSSs have the potential for a positive impact in improving clinical and economic measures in the health care system [7][8][9]. The technological advancement that allowed storing big data, as well as the advancement of artificial intelligence (AI), has given rise to machine learning (ML)- and AI-based CDSSs aiming to build data-driven models to predict adverse outcomes in premature infants ahead of clinical diagnosis time [10][11][12].
The steps of building the ML pipeline to predict adverse outcomes involve several intermediate computational steps using the physiological data, of which data preprocessing is the first indispensable step. Namely, in the NICU, physiological signals are collected using a diverse range of devices, which introduce a number of artifacts such as environmental artifacts (eg, device connection failure, equipment noise, electrosurgical noise, and power line interferences); experimental or human error due to patient movement during data acquisition, incorrect or poor contact of the electrodes, and other contact noise; and artifacts due to muscle contraction, cardiac signals, and blinking [13,14]. These noises distort signals and may adversely affect model generalization capability and predictive power [10].
Although recently much progress has been made in building ML models using neonatal physiological data, there are limitations in the detailed reporting of the preprocessing techniques of these signals [15], which in turn hinder the reproducibility of the methods and results. In AI-powered software as a medical device (SaMD), this is especially important as the implementation of a software quality management system (QMS) is only possible by following the best practices and adhering to relevant regulatory standards and guidelines for medical devices, such as ISO 13485, IEC 62304, and IEC 82304-1. Beyond market access considerations, the ongoing international discourse on the regulation of medical software is specifically concentrated on AI and ML. This focus is a response to their growing applications, demanding increased attention from regulatory bodies such as the Australian Therapeutic Goods Administration and the US Food and Drug Administration [16]. Thus, it is crucial to adhere to a standardized protocol following clinical principles guided by domain experts and regulatory requirements while preprocessing the signals and reporting these techniques in detail; this ensures the reproducibility of the methods, allowing transparency in their clinical adoption.
Objectives
As the first step in bridging the gap in their reproducibility for clinical adoption, this review aims to identify studies that used computational methods to analyze premature infants' physiological signals for detecting adverse outcomes. The review describes different tools and techniques used to preprocess physiological signals and provides recommendations on what aspects need further details for the clinical adoption of the techniques. The remainder of the paper is organized as follows: the Methods section explains the detailed search and screening process, while the Results section begins with an overview of the reviewed studies, followed by a detailed analysis. The Discussion section highlights the key reporting patterns identified in this review along with their shortcomings and provides recommendations for transparent reporting of future studies, as it allows for accurate reproduction of the results and makes them usable in the clinical setting [17]. A summary of the work concludes the paper.
Screening and Study Selection
The initial screening of the databases led to 3585 papers. Of these, 590 (16.46%) papers were manually identified as duplicates and excluded from the analysis. One paper was identified as a duplicate by the automation tool and removed. The remaining 2994 (83.51%) papers were subjected to title and abstract screening using the Rayyan Intelligent Systematic Review application (Qatar Computing Research Institute) [20].
Several inclusion criteria were set to select papers for full-text review.The criteria are mentioned in Textbox 1.
After screening the titles and abstracts, 81 articles were selected for full-text review; 29 (36%) papers were excluded during this stage as they did not align with the inclusion criteria, leaving 52 (64%) papers eligible for detailed synthesis and analysis.
The title and abstract screening was done by 1 reviewer, while 2 reviewers independently checked for paper eligibility against the inclusion criteria at the full-text review stage. When both reviewers were not in agreement on any papers, a third reviewer assessed them to provide a final decision on the inclusion and exclusion of the papers. Data charting was done using Microsoft Excel, and the following variables were recorded in line with related review papers [10,21]: title, year, journal, authors, digital object identifier, data set, participant number, participant demographic, signals used, data set size, sample rate, other data (if applicable), outcome metric, device software, programming language, preprocessing methods, algorithms, other techniques, features, models, model type, results (quantified), and key findings. Data synthesis was done using a narrative approach by summarizing findings based on the similarities in the data sets and techniques used. The detailed search queries,
bibliography files of all databases, all included papers, metadata of all papers, and metadata of all papers included for full-text review are provided in Multimedia Appendices 1-5.
Textbox 1. Inclusion criteria.
• Article type: articles must be peer-reviewed publications in a journal, conference, or workshop
• Data: articles must conduct an analysis on premature human infant data; articles must use physiological responses in some form
• Outcome: articles discuss applications relating to adverse neonatal outcomes such as mortality, length of stay, sepsis, necrotizing enterocolitis, intraventricular hemorrhage, hypoxic-ischemic encephalopathy, apnea, bradycardia, and other poor health outcomes, also known as morbidity. The disease outcomes were chosen based on the commonly researched outcome metrics using preterm infant data and the search terms used in McAdams et al [10], which investigated artificial intelligence and machine learning techniques used to predict clinical outcomes in the neonatal intensive care unit
• Analysis: articles reported some form of computational techniques in their analysis
Overview of the Included Studies
Figure 1 shows the full process of database search and study selection using a PRISMA flow diagram.
As the studies were found to be heterogeneous in their study design and analysis techniques, a narrative approach was taken to summarize the studies and their key findings. The studies were grouped according to the homogeneity in terms of the data sets used and sorted by the publication year. This approach was inspired by the review article by Mann et al [78].
One of the noticeable patterns identified through the results reported in Table 2 is that the groups publishing studies using the same data set followed similar preprocessing techniques, although not at every step. For instance, studies using the ECG data from Cork University Maternity Hospital all used the same algorithm for QRS complex detection. However, they were diverse in their selection of filtering techniques and segmentation duration. Furthermore, they systematically failed to report detailed parameter settings for the QRS complex detection. While the approach of using similar preprocessing techniques helps maintain consistency to some extent, it does not confirm adherence to clinical practices identified from domain expert knowledge.
The QRS complex characteristics and RR intervals of neonates are different from those of adults and as such require an appropriate adjustment of QRS detection algorithms. This is a necessary first step for HR variability (HRV) analysis in neonates. However, a review published on neonatal HRV by Latremouille et al [15] revealed that, given a lack of clear guidelines on neonatal vital signs and HRV analysis, several studies followed HRV analysis guidelines for adults published by the Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology [79]. Our review found that 16 (44%) out of the 36 studies analyzing ECG signals used the Pan-Tompkins algorithm for QRS complex detection. The original implementation of the algorithm was based on the ECG characteristics of the adult population, and the signals were therefore preprocessed accordingly. Only 4 (25%) of those 16 studies reported adjustment of the original algorithm to adapt it to neonates, of which only 2 provided specific modification details. In the absence of detailed reporting on the parameter settings, it is difficult to determine whether the settings adhered to neonatal waveform morphology. Incomplete reporting and lack of transparency hinder the understanding of the strengths and weaknesses of a study and limit its reproducibility and usability. Moreover, transparent and detailed reporting is required to confirm adherence to regulatory compliance and is crucial for the clinical adoption of these methods.
Similar to the QRS complex in ECG signals, the acceptable ranges of physiological signals for neonates are also different from those of the adult population. This review found that no studies reviewed the acceptable ranges of the analyzed signals against any published guidelines, which could pose several limitations in the clinical adoption of the methods. This is consistent with another review looking into physiological vital sign ranges from 34 weeks' gestational age, which identified that several studies reported the means of vital signs instead of ranges, making the interpretation into clinical practice difficult [80]. Here, we recommend clear reporting and the use of physiological signal ranges that are clinically validated through published studies and textbooks [81][82][83].

Table 2 (excerpt). Each entry lists the author, year, and data set used; the study settings (signals, participants, data size, model type, and outcome metric); the reported preprocessing; and the key findings.

Gee et al [26], 2016; PICS database [25,88]. 3-lead ECG at 500 Hz, respiration signal at 50 Hz; participants: n=10; data size: ~20-70 hours each; model: diagnostic; outcome metric: bradycardia. RR intervals from ECG were extracted using a modified Pan-Tompkins algorithm (modification details NR); analysis was done on a 3-minute window before each bradycardia; no processing was reported for the respiration signals. Bradycardia severity estimation accuracy was improved by an average of 11% using a point process model of heart rate and respiration.

Gee et al [25], 2017; PICS. 3-lead ECG at 500 Hz, respiration signal at 50 Hz; participants: n=10; data size: ~20-70 hours each; model: prognostic (+116 seconds); outcome metric: bradycardia. RR intervals from ECG were extracted using a modified Pan-Tompkins algorithm (modification details NR); artifacts due to movement, disconnection, or erroneous peaks were removed by visual inspection; no processing was reported for the respiration signals; additional analysis of the frequency content of the RR time series was done using the Morlet wavelet transform [89]. A point process model-based prediction algorithm achieved a mean AUROC of 0.79 for >440 bradycardic events and was able to predict bradycardic events on average 116 seconds before onset (FPR=0.15).

Das et al [27], 2019; PICS. 3-lead ECG at 500 Hz; participants: n=10; data size: ~20-70 hours each; model: prognostic (time NR); outcome metric: bradycardia. Baseline wander was removed using a high-pass filter with a cutoff frequency between 0.5 and 0.6 Hz; motion and disconnection artifacts were removed by visual inspection; QRS complexes were detected using the Pan-Tompkins algorithm [85]; signals were segmented 5 minutes before and 2 minutes after a bradycardic event. Nonparametric modeling using kernel density estimation achieved a 5% false alarm rate in predicting the onset of bradycardia events.

Mahmud et al [28]; PICS. 3-lead ECG at 500 Hz; participants: n=11; data size: ~20-70 hours each for 10 participants and 10 weeks for 1 participant; model: prognostic (time NR); outcome metric: bradycardia. The QRS complex was detected using an algorithm (NR); RR intervals were calculated from the detected peaks. Time and frequency domain features were extracted; an extreme gradient boosting model achieved an average AUROC of 0.867; HRV results showed a significant variation between a healthy infant and an infant prone to bradycardia.

Cabrera-Quiros et al [47], 2021; Máxima Medical Center NICU. ECG at 250 Hz, CI at 62.5 Hz; participants: n=64; data size: NR; model: prognostic (+3 hours); outcome metric: sepsis. QRS complexes from ECG were extracted using a DT-CWT-based method, the same as described in Joshi et al [45]; the CI signal was filtered to remove cardiac artifacts, and peaks were detected using methods similar to those in previous works (NR); features were extracted from every 1-hour signal. Classification using a combination of all features and a logistic regression model reached a mean accuracy of 0.79 (SD 0.12) and a mean precision of 0.82 (SD 0.18) 3 hours before the onset of sepsis.

Central apnea study (author and data set details NR in this excerpt). 47 features were extracted from the vitals; a logistic regression model achieved 0.9 AUROC in detecting central apnea. QRS complexes were detected using the same method as reported in Joshi et al [45] and Cabrera-Quiros et al [47]; from ECG, the SII was calculated by applying a bandpass filter (0.001-0.40 Hz) to 10-second segments and then computing a kernel density estimate to return a patient motion measurement every second; RR intervals were resampled at 250 Hz; the CI signal was processed using the method by Redmond et al [99].

Further entries report: ECG QRS complexes detected using the Pan-Tompkins algorithm [85] to generate RR intervals, with QRS complexes filtered using a technique from Chazal et al [86]; preprocessing following Ghahjaverestan et al [35], with QRS complexes identified using the Pan-Tompkins method [85] and the RR time series uniformly upsampled; and QRS complexes extracted from ECG using a DT-CWT-based method [95], with the CI signal filtered to remove cardiac artifacts (method NR), peaks detected using the method reported by Lee et al [96], and the SII calculated from ECG and CI waveforms using a CWT-based method, as reported by Zuzarte et al [100].

Real-time HRV framework. The Pan-Tompkins algorithm [85] was modified to detect QRS complexes: ECG was filtered using a bandpass filter with a 16-26 Hz cutoff frequency; a low-pass filter (order-120 FIR filter with a corner frequency of 25 Hz) and a high-pass filter (order-160 FIR filter with a corner frequency of 25 Hz) were applied; then a polynomial filter of order 21 was applied as the differentiator filter; finally, a 111-order moving average filter was used, and the QRS complex was detected using an adaptive threshold, followed by Lomb-Scargle (LMS) spectral estimation [102]. The proposed framework provided real-time analysis and HRV extraction to identify the characteristics correlated with periods of high distress or pain.
Handling of Missing Data
During neonatal physiological monitoring, instances of missing data may arise due to sensor disconnection, improper placement, or signal dropouts. To tackle this issue, methodologies like data imputation or interpolation are applied. For example, if gaps exist in a neonate's HR monitoring data, interpolation methods can estimate the missing values by considering neighboring data points. Widely used interpolation techniques include linear interpolation, spline interpolation, and time-based interpolation. In addition, common data imputation methods involve forward fill, backward fill, and imputation using mean or median values. Methods such as forward fill [30], moving average [44], mean imputation [64,66], and interpolation [67] were used by some studies reviewed in this paper.
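As an illustration of these options, the following sketch applies time-based interpolation, forward fill, and mean imputation to a hypothetical heart-rate series with gaps; the values and 1 Hz sampling are invented for demonstration.

```python
import numpy as np
import pandas as pd

# Hypothetical 1 Hz heart-rate trace with dropouts (NaN)
hr = pd.Series(
    [152.0, 151.0, np.nan, np.nan, 149.0, 150.0, np.nan, 148.0],
    index=pd.date_range("2023-01-01 08:00", periods=8, freq="s"),
)

hr_interp = hr.interpolate(method="time")  # time-based interpolation
hr_ffill = hr.ffill()                      # forward fill
hr_mean = hr.fillna(hr.mean())             # mean imputation
print(pd.DataFrame({"interp": hr_interp, "ffill": hr_ffill, "mean": hr_mean}))
```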
Artifact Removal
Neonatal signals can be affected by artifacts, such as those from muscle movements or electrical interference. Commonly used techniques, such as bandpass or notch filters, along with moving averages, are used to effectively eliminate these disturbances. For instance, in neonatal EEG signals, adaptive filters prove beneficial in eliminating artifacts caused by muscle movements, resulting in a clearer representation of the baby's brain activity. Some methods used by the reviewed papers were high-pass filters [27,46] and bandpass filters [29,33,44,45,56].
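A minimal SciPy sketch of such filtering is shown below; the cutoff frequencies, notch frequency, and 500 Hz sampling rate are illustrative assumptions rather than settings taken from any reviewed study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 500.0  # assumed sampling rate (Hz)

def bandpass(sig, lo, hi, fs, order=4):
    # Zero-phase Butterworth bandpass, e.g. to suppress baseline
    # wander (below `lo`) and high-frequency noise (above `hi`)
    b, a = butter(order, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, sig)

def notch(sig, f0, fs, q=30.0):
    # Notch filter for power-line interference (50 or 60 Hz)
    b, a = iirnotch(f0, q, fs=fs)
    return filtfilt(b, a, sig)

t = np.arange(0, 10, 1 / fs)
noisy = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 50 * t)
clean = notch(bandpass(noisy, 0.5, 40.0, fs), 50.0, fs)
```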
Overview
Resampling is a technique that standardizes data intervals, involving either upsampling (increasing data point frequency) or downsampling (decreasing frequency) to create a regular time series. This aligns signals from different devices or physiological sources. Normalization ensures uniformity and reliability across these standardized sampling rates. For instance, if neonatal HR signals from different devices have varied sampling rates, resampling achieves a common rate, while normalization, using techniques such as minimum-maximum, z score, or log scale, ensures consistent amplitude scaling for accurate comparative analysis. In the reviewed studies, normalization techniques such as minimum-maximum [53] and 0 mean normalization [29,59] were used. In terms of resampling, both downsampling [33,34,41] and upsampling [39] techniques were used.
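The following sketch illustrates rational-factor resampling and the two most common normalization schemes; the signal, rates, and values are hypothetical.

```python
import numpy as np
from scipy.signal import resample_poly

# Hypothetical trace sampled at 62.5 Hz, resampled to 50 Hz
x = np.random.default_rng(2).normal(96.0, 1.0, 625)  # 10 s at 62.5 Hz
x_50 = resample_poly(x, up=4, down=5)                # 62.5 * 4/5 = 50 Hz

# Min-max and z-score normalization of the resampled signal
x_minmax = (x_50 - x_50.min()) / (x_50.max() - x_50.min())
x_zscore = (x_50 - x_50.mean()) / x_50.std()
```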
Waveform Feature Extraction
Extracting relevant features from a signal's waveform is a fundamental step in signal preprocessing. This involves identifying key characteristics such as peaks, troughs, or other significant points in the signal. In the context of neonatal ECG, feature extraction may involve identifying key points such as R-peaks to analyze HRV, providing valuable insights into the infant's autonomic nervous system development. The Pan-Tompkins algorithm is a popular method chosen by multiple papers reviewed in this study that conducted R-peak detection from the QRS complex [22,24,27,33,35,39].
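For orientation, a simplified Pan-Tompkins-style detector is sketched below. It follows the classic bandpass, derivative, squaring, and moving-window integration pipeline, but the passband, window length, and thresholds are illustrative and would need tuning to neonatal waveform morphology (higher heart rates and narrower QRS complexes), as emphasized elsewhere in this review.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg, fs):
    """Simplified Pan-Tompkins-style R-peak detection (illustrative)."""
    b, a = butter(3, [5.0, 30.0], btype="band", fs=fs)  # QRS energy band
    filtered = filtfilt(b, a, ecg)
    squared = np.diff(filtered) ** 2                    # emphasize steep slopes
    window = max(1, int(0.10 * fs))                     # ~100 ms integration
    integrated = np.convolve(squared, np.ones(window) / window, mode="same")
    # Neonatal heart rates can exceed 180 beats/min, so allow peaks ~0.25 s apart
    peaks, _ = find_peaks(
        integrated,
        height=integrated.mean() + integrated.std(),
        distance=int(0.25 * fs),
    )
    return peaks

fs = 500
ecg = np.zeros(10 * fs)
ecg[::int(0.4 * fs)] = 1.0                    # crude spike train at 150 beats/min
rr_s = np.diff(detect_r_peaks(ecg, fs)) / fs  # RR intervals in seconds
```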
Data Segmentation
Segmenting data is the process of breaking down a continuous signal into smaller, more manageable sections to enable targeted analysis. This practice is especially beneficial when dealing with lengthy signals. Data segmentation is a common preprocessing step in ML workflows. For instance, in the analysis of neonatal sleep patterns using EEG, data segmentation can involve dividing the continuous EEG signal into epochs, allowing for the identification and study of sleep stages in shorter, more manageable segments. Commonly used segmentation techniques include fixed-length, sliding window, and threshold- and feature-based segmentation. Some of the data segmentation sizes used in the reviewed studies were 30-second [22][23][24][45] and 1-minute [41] epochs and a sliding window of varied sizes [35,40,55,59,64].
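A generic windowing helper of this kind might look as follows; the epoch lengths mirror those mentioned above, while the signal itself is a placeholder.

```python
import numpy as np

def sliding_windows(sig, fs, win_s, step_s):
    # win_s == step_s gives fixed-length, non-overlapping epochs;
    # step_s < win_s gives overlapping sliding windows
    win, step = int(win_s * fs), int(step_s * fs)
    starts = range(0, len(sig) - win + 1, step)
    return np.stack([sig[s:s + win] for s in starts])

fs = 500
ecg = np.zeros(fs * 300)                   # 5 minutes of placeholder signal
epochs = sliding_windows(ecg, fs, 30, 30)  # ten 30-second epochs
print(epochs.shape)                        # (10, 15000)
```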
In neonatal physiological signal processing, these preprocessing techniques contribute to the accurate interpretation of signals, aiding health care professionals in monitoring and providing appropriate care in the NICU or other clinical settings.
It can be seen from Table 3 that only 7 (13%) out of the 52 reviewed studies reported all the recommended preprocessing steps. This could have several impacts on the downstream analysis. For instance, several papers missed reporting on how they segmented the data for feature extraction and classification, although this is essential for clinical validation in cases where the segment duration affects the adverse outcome prediction performance. In HRV analysis, it is important to indicate whether it is a short-term (~5 minutes) or a long-term (≥24 hours) analysis, as the two reflect different underlying physiological processes and thus demonstrate different predictive power [107]. Along with the segment duration, additional information such as the sampling rate of the signals provides a clear reflection of the data set size. Downsampling the data to a low sampling rate (eg, 50 Hz) has also been shown to have a significant impact on HRV analysis [108]. Although all the reviewed studies mentioned the participant number, and the majority of them (n=39) reported the sampling rate of the signals, very few provided details on the sample size or data set duration or whether the data set was resampled for subsequent analysis. These elements provide a clearer picture of the computational time and resources required for clinical validation and adoption. Although physiological recordings collected in the NICU environment suffer greatly from missing data due to factors similar to those that introduce artifacts [109], reporting on how missing data are handled is scarce. Different methods for dealing with missing values could produce different results, and not all might be suitable for a particular problem. Therefore, it is important to report all the details related to the adopted approach.
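As a small illustration of why these details matter, two standard time-domain HRV statistics can be computed from an RR-interval series as below; both depend directly on segment duration and R-peak timing fidelity. The simulated intervals are placeholders in a plausible neonatal range.

```python
import numpy as np

# Hypothetical RR intervals (ms) from a ~5-minute, short-term recording;
# neonatal heart rates of ~120-180 beats/min give RR intervals of ~330-500 ms
rr_ms = np.random.default_rng(3).normal(400.0, 15.0, 750)

sdnn = rr_ms.std(ddof=1)                       # overall variability
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))  # beat-to-beat variability
print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")
```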
The incomplete or partial reporting found in these studies has significant implications for the implementation of a QMS when using these techniques for clinical adoption. A good implementation of a QMS requires comprehensive reporting of each intermediary step involved in constructing an AI and ML pipeline. The International Medical Device Regulators Forum offers guidance on the clinical evaluation required for any product intended for use as a medical device [110]. According to the International Medical Device Regulators Forum guidelines, during clinical evaluation, relevant research articles are reviewed to identify clinical evidence supporting the product [111]. The guideline encourages manufacturers to follow these recognized standards and best practices in the development, validation, and manufacturing processes. Clinical evaluations are required by the European Union medical device regulation, and they are also mentioned in ISO 13485 (the quality management standard for medical devices). Thus, detailed reporting is crucial as it can be used by regulatory bodies to clinically evaluate future SaMD products. Steps such as the missing data handling procedures are also required by the TRIPOD (Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis) checklist for model development and validation, which assesses the risk of bias and clinical usefulness of the prediction model [112]. Another example is a questionnaire prepared by the German Notified Body Interest Group, which was adopted to assess some AI-powered medical products in the European Union. This questionnaire includes inquiries about data management, including data collection, labeling, preprocessing procedures, and relevant documentation. Transparent and detailed reporting of these steps is essential to ensure the safety, efficacy, and reliability of SaMD.
Principal Findings
This review aimed to summarize the computational methods used for preprocessing preterm infants' physiological data as a first step in developing data-driven predictive models for adverse outcomes related to clinical decision support. This is an important step, especially from a clinician's perspective, because it increases the trustworthiness of the developed models by allowing for the verification and reproduction of the results. In addition, it aids in achieving regulatory compliance and ensures the safety, efficacy, and ethical use of AI-based health care devices. Furthermore, it allows us to recognize the shortcomings in the current state-of-the-art studies and recommend guidelines for transparent reporting. The review found that the studies were heterogeneous in terms of their methods and applications. Therefore, a narrative approach to reporting the results was taken instead of a quantitative approach. Through the analysis, we identified several key components that were incomplete or partially reported by the included studies, which are summarized in Table 3. To ensure transparent reporting for any future studies in this area, we recommend detailed reporting of all preprocessing steps listed in Table 3, which will allow revealing their strengths and weaknesses and ultimately make them usable and reproducible. Reproducible research allows clinicians to make more informed decisions about patient care and treatment based on evidence that has been thoroughly assessed.
Comparison With Prior Work
The reviews published in recent years have highlighted the potential of big data and AI in supporting clinical decision-making in the neonatal health care domain [10,15,21,113,114], particularly in using physiological data for detecting or predicting neonatal health outcomes. However, appropriate preprocessing of these data is a prerequisite for developing clinically deployable models. A systematic review by McAdams et al [10] reported different ML models used to predict different clinical outcomes in neonates. However, their primary focus was on 5 neonatal morbidities, and they did not focus on reporting the preprocessing methods applied before building the ML models. Furthermore, they did not include studies using real-time continuous physiological data; 28 out of their 68 studies were based on physiological data (not continuous), and the rest were based on electronic medical records and imaging data. Latremouille et al [15] performed a review on HRV analysis for neonates. The primary limitation of the reviewed work was the lack of detailed reporting about the preprocessing steps of ECG signals before HRV analysis, such as ECG handling and segmentation, the R-wave (QRS complex) identification technique, software and parameters, and the ranges of all HRV features. They identified these components as incomplete or missing in the studies they reviewed and thus recommended clear reporting of these aspects for future studies in this area. These limitations served as a motivation for our review to focus on the preprocessing techniques of neonatal physiological signals in a broader sense, which serves as the preliminary step for any big data-based approaches.
Limitations
There are several limitations to this review. Screening of all the included studies was conducted by a single reviewer, which may have introduced bias. In addition, this review did not include a quantitative or comparative analysis of the reviewed studies, as the techniques used to analyze the physiological signals were diverse. Future work could include a quantitative evaluation of the studies that were homogeneous in design.
Conclusions
This review explores the computational methods used by current state-of-the-art ML-driven clinical decision support approaches to preprocess physiological signals collected from infants treated in the neonatal setting. A summary of the studies identified heterogeneity in the techniques used for analysis and revealed a lack of consistent and detailed reporting, which is important for building robust, transparent, and clinically deployable prediction models. The availability of powerful hardware and software resources in the NICU environment and growing interest in big data and AI are driving strong demand for clinical decision support applications. We recommend clear reporting of the different steps in the preprocessing of neonatal physiological signals to ensure transparency in clinical validation and accelerate the adoption of developed models in the clinical setting. This will further enhance the delivery and adoption of reliable, regulatory-compliant, safe, and effective products in health care.
Figure 1. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram for the database search and study selection.
Table 2. Summary of the articles reviewed in this study, grouped according to the homogeneity in terms of the data sets used and sorted by the publication year.
Table 3. Required physiological signal preprocessing steps reported by each of the studies in this review.
"Medicine",
"Computer Science"
] |
Host Mesh Fitting of a Generic Musculoskeletal Model of the Lower Limbs to Subject-Specific Body Surface Data: A Validation Study
Challenges remain in accurately capturing the musculoskeletal geometry of individual subjects for clinical and biomechanical gait analysis. The aim of this study was to use and validate the Host Mesh Fitting (HMF) technique for fitting a generic anatomically based musculoskeletal model to 3D body surface data of individual subjects. The HMF technique is based on the free-form idea of deforming geometrically complex structures according to the deformation of a surrounding volumetric mesh. Using the HMF technique, an anatomically based model of the lower limbs of an adult female subject (29 years) was customized to subject-specific skin surface data of five typically developing children (mean age 10.2 years) and six children with Cerebral Palsy (CP) (mean age 9.6 years). The fitted lengths and volumes of six muscle-tendon structures were compared against measures from Magnetic Resonance (MR) images for validation purposes. The HMF technique resulted in accurate approximations of the lower limb shapes of all subjects in both study groups. The average error between the MR data and the fitted muscle-tendon lengths from HMF was 4 ± 4% in children without CP and 7 ± 5% in children with CP, respectively. The average error between the MR data and the fitted muscle volumes from HMF was 28 ± 19% in children without CP and 27 ± 28% in children with CP, respectively. This study presents a crucial step towards personalized musculoskeletal modelling for gait analysis by demonstrating the feasibility of fitting a generic anatomically based lower limb model to 3D body surface data of children with and without CP using the HMF technique. Additional improvements in the quality of fit are expected to be gained by developing age-matched generic models for different study groups, accounting for subject-specific variations in subcutaneous body fat, as well as considering supplementary data from ultrasound imaging to better capture physiological muscle tissue properties.
Introduction
Computer models of the musculoskeletal system have widely been applied to biomechanical and clinical gait analysis. Musculoskeletal modelling has provided means to quantify muscle and joint function during walking that cannot be measured otherwise. In particular, muscular weaknesses or bilateral asymmetries can result in altered and potentially harmful internal tissue loading, which cannot be investigated based on external observation alone. By combining data from optical motion capture with computational models of the musculoskeletal system, crucial insights have been gained into, e.g., muscle-tendon length changes during walking in patients with Cerebral Palsy (CP) to help target treatment interventions [1], and such models have served as an intermediate step for calculating muscle-tendon forces and joint loading to assist with rehabilitation intervention and monitoring [2].
Generic musculoskeletal models of the lower limbs have traditionally been adopted and crudely scaled to subject-specific dimensions in order to analyze biomechanical parameters such as joint forces, muscle-tendon lengths, or lengthening velocities during gait for individual subjects [1,3,4]. Thereby, the term "generic" refers to a reference model or data set, commonly resembling the anatomy of an adult male or female subject without musculoskeletal injury or disease. In recent years, more advanced optimization algorithms have been introduced in an effort to improve the accuracy of musculoskeletal modelling results for personalized gait analysis [5][6][7][8]. Yet, the most widely used fitting algorithms remain based on the positions of bony anatomical landmarks, assuming that the skeletal system sufficiently reflects the subject-specific architecture of the entire musculoskeletal system.
There is growing evidence that the fitting of musculoskeletal models based on bony anatomical landmarks may lead to incorrect conclusions, especially for clinical gait analysis in patients with severe musculoskeletal impairments due to conditions such as CP. Muscle architecture has been found to be significantly altered due to CP [9][10][11], and bone deformities, commonly observed in children with CP, have been shown to significantly affect joint kinematics, muscle-tendon lengths, and muscle moment arms during walking [12,13]. Furthermore, bone deformities in the distal segments have been related to altered joint kinematics in the proximal joints and vice versa [14], and changes in the path of one muscle-tendon structure may affect the paths of neighboring muscles and hence the dynamics of the entire multibody musculoskeletal system. Such local differences in musculoskeletal architecture cannot be captured using generic musculoskeletal models that are simply scaled based on the positions of bony anatomical landmarks.
Magnetic Resonance (MR) and ultrasound imaging provide additional insights into the musculoskeletal architecture of individual subjects and have been considered for application to clinical gait analysis. Novel algorithms have been developed to automatically segment MR images based on previous knowledge from generic image data sets [15,16]; and fitting techniques have been introduced to morph generic models of individual organs to a limited number of subject-specific MR images [17][18][19]. Yet, the implementation of image-based fitting algorithms to widespread clinical practice has often been a challenge due to long acquisition times of MR imaging as well as high imaging and computational costs. The integration of ultrasound imaging to gait analysis is considered more feasible; yet, ultrasound imaging is confined to a small imaging field of view, e.g., calf muscles, and thus requires additional means of fitting the entire multibody musculoskeletal system to individual subjects [20].
The aim of this study was to use a free-form deformation technique known as Host Mesh Fitting (HMF) for fitting generic musculoskeletal models to 3D body surface data of individual subjects and assess its accuracy in an effort to address the persisting limitations in musculoskeletal modelling for personalized gait analysis. The HMF technique was initially introduced to develop subject-specific 3D models of individual organs [18] and was later applied and validated for predicting the deformation of muscle-tendon structures in the lower limbs during walking [21]. The present work extends on these previous efforts by aiming to fit a generic musculoskeletal model of the lower limbs of an adult female subject to 3D body surface data of children with and without CP and compare the fitted lengths and volumes of six muscle-tendon structures with the subject-specific muscle-tendon lengths and volumes derived from MR data. The use of 3D body surface data and HMF for fitting musculoskeletal models to individual subjects is expected to be particularly suited for gait analysis in population groups where bony anatomical landmarks are not sufficiently accurate and MR scanning not applicable due to time, cost, or ethical constraints.
Materials and Methods
2.1. Volumetric Host Mesh Fitting. In the following, the theoretical principles of the HMF technique are summarized. Given a generic 3D model of the musculoskeletal system with embedded tissue structures (e.g., muscle-tendon structures and bones of the lower limbs) and subject-specific skin surface data, the HMF process is divided into four steps (Figure 1). In Step I "Model registration," the generic lower body mesh is aligned and homogeneously scaled to subject-specific dimensions by calculating an overall affine transformation matrix using the conventional positions of bony anatomical landmarks. The affine transformation matrix comprises rotation, scaling, shearing, and translation and is obtained by minimizing the distances between bony anatomical landmarks of the generic model and manually annotated subject-specific bony landmarks. In Step II "Recording local muscle position," the material positions of the muscle-tendon structures of the generic model are calculated with respect to the surrounding 3D lower body mesh in preparation for skin mesh fitting. In Step III "Data fitting of skin mesh," the registered lower body mesh is customized to subject-specific 3D body surface data to find the optimum mesh nodal degrees of freedom (i.e., mesh nodal parameters, including nodal positions as well as nodal derivatives in the case of bicubic-linear interpolation functions). In brief, the HMF objective function $F(u_n)$ is set up to find the optimum mesh nodal parameters $u_n$ that minimize the Euclidean distances between the subject-specific data points and their projections onto the lower body mesh in a least-squares sense as follows: $$F(u_n) = \sum_{d=1}^{D} \left\| s_d - p_d(u_n) \right\|^2 + \delta(u_n, \gamma_i), \qquad (1)$$ whereby $p_d$ denotes the coordinates of the projection points $d = 1, \ldots, D$ with respect to the lower body mesh, $s_d$ is the corresponding global coordinates of the subject-specific target points, and $\delta(u_n, \gamma_i)$ is a 3D smoothing constraint called the Sobolev function with user-defined penalty parameters $\gamma_i \in [0, 1]$ for controlling arc lengths, curvatures in the element coordinate directions, surface area terms, and the volume of the lower body mesh. Further details on the HMF objective function and 3D smoothing constraints can be found in [18,21]. Finally, in Step IV "Updating new muscle position," the spatial positions of the muscle-tendon structures are calculated according to the customized position of the lower body mesh. This is carried out under the assumption that the material positions of the muscle-tendon structures with respect to the surrounding 3D lower body mesh do not change during customization. The HMF algorithm is implemented in the modelling environment CMISS (http://www.cmiss.org). CMISS is an interactive computational modelling environment for Continuum Mechanics, Image analysis, Signal processing, and System identification, which has extensively been used for high-order subject-specific modelling of the musculoskeletal system [11,18,21,22]. CMISS has been developed as part of the International Union of Physiological Sciences (IUPS) Physiome Project [17,19] and is currently being redeveloped into the open-source package OpenCMISS to make it more modular, extendable, easier to understand, and able to run on modern distributed-memory high-performance computers (http://www.opencmiss.org).
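To make Step III concrete, below is a minimal sketch of the data-fitting objective in equation (1) for a toy one-dimensional mesh with linear interpolation. It is an illustration under stated assumptions, not the CMISS implementation, which operates on high-order bicubic-linear finite element meshes with full Sobolev smoothing; the mesh layout, penalty weight, and data values here are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

targets = np.array([0.1, 0.9, 2.2, 2.8, 4.1])  # hypothetical target points s_d
xi = np.array([0.05, 0.25, 0.55, 0.70, 0.95])  # fixed material coordinates of their projections
NODE_XI = np.array([0.0, 0.5, 1.0])            # two linear elements, three nodes
gamma = 0.1                                    # assumed smoothing penalty weight

def objective(u):
    p = np.interp(xi, NODE_XI, u)                       # p_d(u_n): projections through the mesh
    data_term = np.sum((targets - p) ** 2)              # least-squares data term
    smooth_term = gamma * np.sum(np.diff(u, n=2) ** 2)  # crude curvature (Sobolev-like) penalty
    return data_term + smooth_term

u0 = np.array([0.0, 2.0, 4.0])    # nodal positions after Step I registration
result = minimize(objective, u0)  # optimum nodal parameters u_n
print(result.x)
```

The smoothing term plays the role of the Sobolev constraint: it trades closeness to the data against mesh regularity, controlled by the penalty weight.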
2.2. Validation.
The accuracy of the HMF technique was assessed by fitting a generic lower limb model to subject-specific data of children with and without CP and comparing the predicted muscle-tendon lengths and volumes of the fitted model with subject-specific MR data. A generic lower limb model of an adult female subject (age 29 years, height 165 cm, and weight 63 kg), which had previously been manually developed based on subject-specific MR data [21], was used for this purpose. The lower limb model comprised all lower limb bones, 20 muscle-tendon structures, and a volumetric representation of the skin boundary surface of each leg. All geometries of the musculoskeletal lower limb model were represented using high-order finite element meshes with bicubic-linear interpolation functions. Cubic interpolation functions preserve the continuity of the first derivatives of the geometric coordinates with respect to the element coordinates, which makes them ideal for smoothly approximating the curved surfaces of biological tissue with a minimum number of elements [21].
MR images of the lower limbs of six children with CP (mean age 9.6 years) and five typically developing children (mean age 10.2 years) were acquired on a Siemens 1.5 T MAGNETOM Avanto system. Ethical approval was given by the NZ Northern Y Regional Ethics Committee, reference number NTY/06/07/064. Written consent was obtained from all children and their parents or guardians. Subject characteristics and the scan protocol have previously been outlined in detail [11]. The image processing tools within CMISS were employed to automatically segment the skin boundary surfaces of the lower limbs. The positions of the following bony landmarks on the skin surface were manually identified according to standard protocols [23]: right/left ASIS, sacrum, medial/lateral epicondyles, and medial/lateral malleoli. The bony landmarks were used to register the generic model to the subject-specific surface data (Step I, Figure 1). The lower body mesh was then customized to subject-specific skin surface data, and the new configuration of each muscle-tendon structure was calculated according to the customized lower body mesh (Steps II-IV, Figure 1). Muscle-tendon lengths and volumes of the fitted models were numerically derived and compared with subject-specific measures from MR images for validation purposes. The following six muscles were included in the analysis: soleus, gastrocnemius, semimembranosus and semitendinosus (represented as one muscle), biceps femoris, rectus femoris, and the vasti group. Muscle-tendon lengths were defined as the average arc lengths between the most distal and most proximal ends of the muscle-tendon meshes, normalized with respect to segmental lengths. Muscle volumes were derived by performing numerical quadrature over the parameterized meshes (Fernandez et al., 2005), divided by body mass. The fitting error $E_{HMF}$ was defined as the relative difference in muscle-tendon length $l$ and muscle volume $V$, respectively, between the fitted values from HMF and the subject-specific values derived from the MR images: $$E_{HMF} = \frac{\left| x_{HMF} - x_{MR} \right|}{x_{MR}} \times 100\%, \quad x \in \{l, V\}. \qquad (2)$$ 2.3. Statistical Analysis. Statistical analysis was performed to assess the significance of the differences in muscle-tendon lengths and volumes between the fitted and the subject-specific values from MR imaging. All parameters were tested for a normal distribution prior to data comparison using the Kolmogorov-Smirnov method [24]. A repeated measures analysis of variance (ANOVA) with the Tukey-Kramer multiple comparisons post hoc test [25] was performed to analyze the pairwise differences in muscle-tendon lengths and volumes between the fitted and the subject-specific measures from MR imaging. The data of the children with CP and without CP were analyzed independently as two different groups. Statistical analysis was performed using the statistical software GraphPad InStat. The level of significance was set at p < 0.05 for all statistical tests.
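As a small illustration of the error metric in equation (2), the following sketch computes the relative fitting error for hypothetical values; the numbers are invented for demonstration and are not the study's data.

```python
import numpy as np

l_hmf = np.array([0.52, 0.48, 0.61])  # hypothetical HMF-fitted normalized lengths
l_mr  = np.array([0.50, 0.51, 0.60])  # hypothetical MR-derived normalized lengths

def e_hmf(fitted, reference):
    """Relative fitting error of equation (2), in percent of the MR value."""
    return 100.0 * np.abs(fitted - reference) / reference

print(e_hmf(l_hmf, l_mr))  # per-muscle errors in percent, e.g. [4.0, 5.88, 1.67]
```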
Results
A generic lower limb model of an adult female subject was fitted to skin surface data of children with and without CP using the HMF technique. Eleven bony landmarks and an average number of 1,858,218 (±845) data points on the skin boundary of each subject were used for the fitting process. The HMF technique resulted in smooth approximations of the lower body shapes of all subjects in both study groups (Figure 2). The average Root Mean Square (RMS) error between the fitted lower body mesh and the subject-specific surface data from MR imaging was 3.7 ± 1.08 mm.
The average normalized muscle-tendon lengths derived from HMF compared to the subject-specific values from MR images are given in Table 1. Statistical analysis revealed that HMF led to accurate predictions of muscle-tendon lengths in the children without CP for all muscles except rectus femoris. In the children with CP, HMF led to accurate predictions of muscle-tendon lengths for soleus, biceps femoris, and the vasti group, while significant differences were obtained between the fitted and the MR-based values for gastrocnemius, semimembranosus-semitendinosus, and rectus femoris. The average fitting error (equation (2)) in muscle-tendon lengths from HMF was 4 ± 4% in the group of children without CP and 7 ± 5% in the children with CP, respectively.
The average normalized muscle volumes derived from HMF compared to the subject-specific values from MR images are given in Table 2. Overall, the prediction of muscle volumes was poor, with an average fitting error (equation (2)) of 28 ± 19% in children without CP and 27 ± 28% in children with CP, respectively. Statistical analysis revealed significant differences in the predicted muscle volumes from HMF compared to MR imaging for four muscles in the children without CP (soleus, biceps femoris, rectus femoris, and vasti group) and for two muscles in the children with CP (biceps femoris, semimembranosus-semitendinosus).
Discussion
The aim of this study was to address current limitations in subject-specific musculoskeletal modelling for personalized gait analysis by applying and validating the HMF technique to fit a generic model to subject-specific 3D body surface data. The HMF technique extends the scaling of generic musculoskeletal models based on bony anatomical landmarks in that it comprises an affine transformation (rotation, translation, and scaling) followed by model customization to account for subject-specific variations in lower limb shape. High accuracies were obtained in the fitted lower limb shapes in both study groups, with the RMS error between the subject-specific 3D body surface data and the fitted lower limb mesh being less than 5 mm for all data points. The accuracies in muscle-tendon lengths are also considered promising for having the potential to improve gait analysis results, with an average error of 4 ± 4% in the children without CP and 7 ± 5% in the children with CP, respectively (Table 1). The average errors in muscle-tendon lengths in both study groups are below or around the lower range of previously reported errors in muscle-tendon length predictions using generic musculoskeletal models for clinical gait analysis, e.g., 6% to 50% [26]. However, the accuracies in muscle volumes were limited, with large variations in both study groups compared to the subject-specific MR data (Table 2).
The HMF technique is established under the assumption that the lower limb shape reflects the internal musculoskeletal architecture, which is a limitation of the proposed technique. It means that the relative positions of muscle-tendon structures with respect to the skin mesh remain constant during model fitting. If, for example, a thick subcutaneous fat layer between muscles and skin is present in the generic model, the relative thickness of the fat layer remains the same throughout HMF. Looking more closely at the MR images (Figure 3), it becomes apparent that significant differences existed in muscle volumes between individual subjects. In particular, the child subjects had less subcutaneous fat compared to the adult female subject, which could partly explain the unsatisfactory prediction of muscle volumes compared to muscle-tendon lengths. Interestingly, the average error for muscle volumes was slightly lower for the children with CP than for the children without CP, which is an unexpected result (Table 2). Based on the MR images (Figure 3), it appears as if the ratio of muscle tissue to fat tissue in children with CP more closely resembled the adult female anatomy, e.g., a thicker fat layer with less muscle tissue, which may explain the unexpected outcome. Thereby, the volumetric tissue distribution critically affects the inertia properties of the multibody dynamic system and hence gait analysis results. These insights suggest that additional skin fold measurements may help to improve the model fit by allowing the relative thickness of the fat layer, and thus segmental inertia properties, to be adjusted for individual subjects.
The time needed to develop musculoskeletal models by manually segmenting MR images is lengthy and can take several months. Currently, the modelling software CMISS contains a library of MR-based lower limb models of one female subject, six children with CP, and five typically developing children, which were adopted in the present work. The present goal to accurately fit a generic model of an adult female subject to the anatomy of children with severe gait impairments due to CP was ambitious. It is likely that more accurate results can be obtained when fitting the generic model to subjects of similar age and without significant musculoskeletal impairments. Nevertheless, the present results are promising and considered the first step towards an advanced modelling framework for subject-specific simulation and analysis of human movement. In addition to the MR-based lower limb models within CMISS, data from gait analysis were acquired in the same subjects. This unique dataset will allow the comparison of muscle-tendon length calculations during walking between generic and HMF-fitted musculoskeletal models as a next step. Furthermore, an extension of the model library based on the Visible Human Dataset from the U.S. National Library of Medicine, which includes Computed Tomography and MR images of one male and one female cadaver, is planned. The Visible Human Dataset has been applied to musculoskeletal research, educational, virtual reality, industry, and diagnostic purposes and thus will provide widely accepted reference models for future use.
The solution of the HMF objective function (equation (1)) is, in its present form, dependent on the geometry of the lower body mesh (i.e., the mesh nodal degrees of freedom) and the magnitudes of the Sobolev smoothing constraints. Both the geometry of the lower body mesh and the Sobolev smoothing constraints have not been linked to physiological or anatomically based principles but were defined according to previously established kinematic criteria [18]. Kinematic surface-based deformation methods have extensively been used in computer graphics research [27]. Yet, they traditionally do not treat biological soft tissue as elastic solids subject to Newton's laws of motion. In recent work, Kadleček et al. [28] introduced a physics-based model fitting technique to find the optimum shape of a musculoskeletal model based on several 3D body surface scans by minimizing the deformation energy corresponding to the elasticity of biological soft tissue. The consideration of a so-called elastic potential to find the optimum fit solution (equation (1)) while complying with Newton's laws of motion for soft tissue is promising and may offer the potential to improve the accuracy of the HMF fit for subjects with various ratios of subcutaneous body fat to muscle tissue.
Additionally, data from ultrasound imaging may allow further insights into mechanical tissue properties to advance the HMF technique based on anatomically aware principles [20]. Capturing subject-specific mechanical properties of soft tissue is particularly important when aiming to analyze kinetic variables, e.g., muscle forces, in patients with musculoskeletal disorders such as CP. Yet, taking subject-specific tissue samples in vivo for refining musculoskeletal models remains highly invasive and rarely feasible. Ultrasound data would make it possible to better capture the mechanical properties of muscles at the tissue level, e.g., physiological cross-sectional area and fiber pennation angle, which in turn affect muscle mechanics. Ultrasound imaging is relatively inexpensive, does not involve ionizing radiation, and requires much shorter scan times compared with other imaging modalities such as MR imaging. Notably, an anatomically aware deformation method was recently introduced by Saito et al. [29] to predict the growth and size of muscles by discretizing the anisotropic stretch in the direction of muscle fibers. The integration of muscle fiber structures into the present musculoskeletal modelling approach is highly feasible: a muscle fascicle description has already been successfully integrated into the muscle organ models in CMISS and fitted to subject-specific ultrasound data with good qualitative agreement to diffusion-weighted MR images [30].
In this study, the skin boundary surfaces of individual subjects were segmented based on MR data, though body surface scanning could be used to capture the outer skin surface of individual subjects in future work. Body surface scanning, frequently used in anthropometric body shape analysis and obesity research, offers inexpensive, rapid, and noninvasive means to characterize the skin boundary in vivo [31] and would make the application of the HMF technique feasible in clinical settings. Thereby, the numerical algorithms associated with HMF, as well as the library of MR-based musculoskeletal models, are currently being transferred into the open-source modelling environment OpenCMISS (http://www.opencmiss.org/) to provide the most advanced and accessible numerical tools for physiologically based modelling of deformable organs, e.g., muscle tissue across multiple scales, including multibody dynamic analysis [19,21,22,30].
Conclusions
The current study presents a crucial step towards personalized human movement analysis by demonstrating the feasibility of fitting a generic musculoskeletal model of the lower limbs to skin surface data of children with and without CP. The musculoskeletal models of the lower limbs and fitting algorithms are planned to be further developed and shared between research centers through the IUPS Physiome Project [19] and coupled with experimentally measured gait data for dynamic simulations of walking. Additional improvements in the quality of fit are expected to be gained by developing age-matched generic models for different study groups, as well as taking into account subject-specific skin fold measures and mechanical properties of muscle tissue based on ultrasound imaging. It is anticipated that the application of personalized musculoskeletal models to movement analysis will lead to crucial new insights into the complex relationship between musculoskeletal architecture and function during dynamic activities and thus assist in the assessment and management of movement pathologies due to conditions such as CP.
Data Availability
The MR image data used for this study are restricted by the New Zealand Northern Y Regional Ethics Committee in order to protect patient privacy. The data is only available to researchers who meet the criteria for accessing the confidential data. Further information can be obtained from the corresponding author Dr. Katja Oberhofer (katja.oberhofer@hest.ethz.ch).
Conflicts of Interest
All authors declare that they have no proprietary, financial, professional, or other personal relationships or obligations of any kind with other people or organisations that could inappropriately influence their work.
"Engineering",
"Medicine"
] |
A Comprehensive Survey of IoT-Based Cloud Computing Cyber Security
The flexible architecture offered by cloud computing allows resources and data to be dispersed over numerous locations, making them accessible from a variety of industrial settings. The use, storage, and sharing of resources such as data, services, and industrial applications have all changed as a result of cloud computing. In the past ten years, companies have shifted rapidly to cloud computing in order to benefit from increased performance, lower costs, and more extensive access. The internet of things (IoT) has also improved significantly since cloud computing was incorporated. However, this quick shift to the cloud brought up a number of security concerns and challenges: traditional security measures do not apply directly to cloud-based systems and are occasionally inadequate. Despite the widespread use and proliferation of various cyber weapons, cloud platform issues and security concerns have only begun to be addressed over the last three years. The rapid development of deep learning (DL) in the field of artificial intelligence (AI) has produced a number of advantages that can be applied to cloud-based industrial security concerns. The contributions of this survey are as follows. We provide a detailed evaluation of the structure, services, configurations, and security models that enable cloud-based IoT. We classify cloud security risks in IoT into four main areas (data, network and service, applications, and devices) and, for each class, describe the limitations from a general, artificial intelligence, and deep learning perspective. We identify and analyze the most recent innovations in cloud-based IoT attacks. We identify, discuss, and verify key security challenges, presenting the technological difficulties raised in the literature before highlighting significant research gaps in IoT-based cloud infrastructure for future research orientations. Keywords: cloud computing; cyber security.
Introduction
A large community of IoT-enabled devices and applications makes up an internet of things (IoT)-based cloud architecture. Servers, storage, underpinning infrastructure, real-time processing, and operations are all included within the infrastructure. IoT-based cloud architecture also includes the standards and services required for safeguarding, managing, and connecting various IoT applications and devices. The traditional IoT architecture is shown in Figure 1, and the IoT-based cloud attack model is shown in Figure 2. The recent decade witnessed the advent of the cloud, and the next decade is seeing the rise of its varieties. [1][2][3] We observe that among these variations, the internet of things (IoT) is leading; others, such as service architectures and distributed cloud, follow it in recent trends across habitats, data centre operations, and management domains. 4 According to a recent Gartner article, cloud computing is one of the top ten strategic technological trends for 2020, 5 with the market for cloud services likely to increase by 17% during that year. According to 6, the phrase "cloud computing" was originally used to describe platforms for distributed computing in the 1990s. For instance, Amazon delivered Elastic Compute Cloud (EC2) in 2006. 7 Similarly, Google launched the beta version of Google App Engine in 2008. 8 NASA released the first version of the open-source software OpenNebula for the deployment of hybrid and private clouds in 2008. 9 In 2008, Microsoft announced Microsoft Azure, 10 and in 2010 the open-source cloud computing project OpenStack was introduced. 11 IBM developed the IBM SmartCloud framework in 2011. After that, the first Oracle Cloud started offering software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS) in 2012. More developments in the digital world are on the horizon, so this journey is still ongoing. The National Institute of Standards and Technology (NIST) has highlighted five fundamental elements of cloud computing: 12 measured service, resource pooling, rapid elasticity, broad network access, and on-demand self-service. Three service models and four deployment models are also described for providing cloud services. The fundamental goal of cloud computing is to provide online computing services such as servers, storage, databases, networking, software, analytics, and intelligence. Customers can also pick the type and quantity of services that best fit their needs. Due to the cloud's speedy data storage and access, cost effectiveness, convenience, and flexibility, traditional IT services have migrated there. Thanks to cloud computing, organizations no longer have to spend millions on expensive equipment and software to set up on-site data centers. By using remote servers to host software and services, cloud technologies automate various sectors. The majority of sectors are currently following this trend, which is growing with each passing year. 13 For a wide range of industrial applications, traditional cloud computing offers scalability and frequent software and hardware updates. [14][15][16] Additionally, the cloud offers a variety of security options and allows for effective network resource utilization. With these benefits, it is clear that cloud computing has a lot of promise.
Cloud computing and its supporting technologies offer industries an extensive range of future opportunities and have the ability to open up a number of applications, solutions, services, and platforms. Using DL in cloud computing, massive datasets and training algorithms can be consumed. By utilizing the GPU's processing capability, the cloud can also help DL models achieve efficiency at a large scale at a reasonable cost.
The success of any cloud-based solution strongly depends on giving cloud administrators, software developers, and end users the best possible experience. The adoption of the cloud is constrained by factors such as complexity, compliance, security, dependency, privacy, control, and cost. 17 Since data and applications may live at many layers depending on the cloud service architecture adopted, security can be seen as a critical barrier in cloud computing. Because of this uncertainty, security has now been identified as the top cloud computing risk. 18 In January 2020, Gartner identified four key issues that could influence cloud adoption as distributed multi-cloud scenarios become more common. 19 One of them is coping with the related security and privacy problems. Together with virtual environments, cloud services enable the distribution of heterogeneous data and resources. In contrast to the limited storage space, computation power, and hardware that users have access to in traditional enterprise software infrastructure, users of cloud computing have access to virtually unlimited storage space and virtual server resources as needed.
Fig. 3: Cloud computing history
Conventional methods for user identification, authentication, and access management cannot be adapted to the cloud in their current state. Significant security concerns include integrated models and architectures, reduced user control, and external data storage. User vulnerability will lead to an increase in cybercrimes affecting people, businesses, and authorities.
Crypto-jacking, denial of service attacks, account theft, and data breaches are all common threats. According to Forbes, 20 Skybox Security published a Vulnerability and Threat Trends report in mid-2019, with a widespread rise in the number of vulnerabilities in cloud containers (a replacement for the conventional VM architecture) as the report's principal finding. Data in the cloud are more vulnerable to threats than in traditional storage architectures, because cloud providers secure only the cloud platform, not the client data. According to the Oracle and KPMG Cloud Threat Report 2019, 82 percent of cloud users have encountered security events. 1 Therefore, guaranteeing cloud security and privacy has become crucial.
The most crucial element for cloud computing to succeed is security, according to research. 12 In 2011, 12 the placement of data was noted as a security problem, and concerns about data security were raised. 14,15 Another element that researchers concentrated on was trust, because it is closely related to the reliability of cloud service providers. The provision of a trust model and subsequent trust management were critical; because cloud computing has inherent security difficulties, trust is ultimately the most important consideration. 16 The same data attacks that affect traditional systems also affect cloud-based services. It was noted that the security of the virtual machine is crucial for the integrity of the data stored there and for the security of cloud computing. 17 In order to understand smart IoT cloud systems, reference 18 provides a survey of research articles from the previous five years that focused on consumer-oriented IoT cloud applications. The author conducted a security analysis of the IoT cloud system and presented a novel IoT cloud paradigm. Reference 19 presents a paradigm for analyzing privacy and security challenges in social networks built on cloud platforms. From a technological perspective, 30 explores both well-known and less-examined security risks associated with cloud systems for various attacks.
A triangulated, three-part study of cloud computing difficulties examined the security issues that currently surround cloud computing. 11 In light of these challenges, the study also suggested implications for cloud computing adoption. Moreover, the authors in 12 supplied another thorough examination of a security problem by comparing the risks that cloud systems face with the various intrusion detection and prevention methods currently in use. Furthermore, 13 examined the usefulness of query-processing algorithms over encrypted data in a high-throughput cloud-based system in a real-time context. The multi-dimensional mean failure cost (M2FC), a quantitative security risk evaluation model addressing the security issues raised by these researchers, was proposed by 14 in 2016; they also suggested appropriate measures to solve the identified security problems.
The internet of things, cloud computing security challenges, and cloud accountability issues were all covered by the authors of 15. The authors of 16 reviewed the factors influencing cloud computing adoption and cyber attacks, and suggested remedies for improving privacy and security in cloud-based systems. The authors of 17 provided a thorough overview of the research on cloud security challenges, vulnerabilities, threats, and attacks, as well as a classification system for them. In order to protect data more effectively, the authors in 18 surveyed privacy techniques in IoT-based cloud systems. Finally, the authors in 19 provided an overview of the most important security issues in cloud computing and cloud infrastructures based on the internet of things.
Methodology
The proposed survey builds on previous research investigations. We designed a systematic method for paper selection and chose papers from various sources based on the following screening process.
Paper Structure
The remainder of the paper is organized as follows.
Background information on cloud designs, cloud kinds, and the SPI model is provided in Section 2.
A thorough overview of prior work that has addressed security challenges in cloud computing is provided in Section 3. The cloud configuration is presented in Section 4. Information on cloud-based assaults is detailed in Section 5. Details on security-related issues is provided in Section 6. The challenges and restrictions of cloud computing are then discussed in Section 7. Section 8 presents future work. Finally, Section 9 brings our study of the highlighted security vulnerabilities to a close.
Background
Cloud computing and the internet of things have recently been the most popular technologies. 43 Current trends predict that the rate of development of digital technologies will be exponential, and the combination of these two technologies can result in efficient resource management. Before moving on to security concerns and problems, this section offers a quick introduction to the various cloud designs, cloud kinds, deployment models, and related attacks. A new variant of the DDoS attack, termed economic denial of sustainability (EDoS), has evolved in the contemporary IoT-based cloud computing era. 14 EDoS abuses the elasticity of metered, pay-as-you-go cloud services (for example, a shared cloud server billed by usage). Remote bots that flood the target cloud service with covert, seemingly legitimate requests can conduct EDoS attacks: the attacker obtains a client account under the pay-as-you-go model, the elastic infrastructure scales up to serve the fraudulent demand, and the victim is then charged for these illegitimate requests.
These flaws can have severe consequences: cloud providers may lose clients, who will opt for the less expensive and more effective option and run their operations from their own corporate offices and data centers rather than paying for inflated demands on the cloud. 15
Cloud Architectures and Deployment
To address both small- and large-scale business problems, cloud architecture consists of numerous cloud components, including data centers, software functions, services, and applications, organized optimally. The purpose of cloud architecture is to provide end users with high bandwidth, reliable access to their data and applications, and an on-demand network that is secure and adaptable. [6][7][8][9][10] Cloud architecture typically specifies the components and the interactions between them. A few crucial elements of generic cloud architecture are, in order: data and resources available on the client, data and resources in the cloud, software components and services, and middleware. The cloud's features and environment can be identified based on its deployment model. Figure 4 represents the deployment models, and Figure 5 represents the NCC-SRA method for data collection, aggregation, and data categorization in the cloud.
Service Models
Cloud computing can be divided into one of three categories according to the type of service model it provides. Infrastructure as a Service (IaaS) provides raw storage space, processing power, or network resources that the customer can use to run and execute any operating system, application, or software of their choice.
Public Cloud
Public clouds are owned and run by various organizations, and their resources, infrastructures, and networks are used concurrently by thousands of individuals and organizations. Public cloud providers include Google, Amazon, and Microsoft. In this type of cloud, resource allocation, ownership detection, shared access control, and protecting cloud data from attacks are critical issues. Reliability, geographical independence, utility-style costing, cost effectiveness, excellent scalability, and versatility are benefits of using public clouds. Low security and limited customization are drawbacks of using the public cloud. 12
Private Cloud
This type of cloud is frequently managed by a single enterprise and is designed specifically to meet its requirements. Private cloud storage permits businesses to have better data control (which may be subject to regulatory compliance requirements). Either a third party or internal staff can administer and host it. The stored data may include trade secrets, medical records, or other sensitive information. In private cloud solutions, the infrastructure is either owned, managed, and used by the organization itself or supplied by a cloud service or infrastructure provider. In contrast to other cloud systems, private cloud security is crucial. 13 It is easier to identify users and vendors, as well as to manage security concerns, than in a public cloud. Utilizing a private cloud has the benefits of increased security, privacy, control, affordability, and energy efficiency. Private cloud usage has drawbacks including rigid pricing and reduced scalability because of limited resources. 12
Hybrid Cloud
In the hybrid cloud approach, a private cloud is connected to one or more external clouds. Multiple cloud environments with manageable and portable workloads are connected and managed centrally in this way. For instance, a company can manage security between private and public clouds while keeping sensitive data in the former and generic data in the latter. The security of the hybrid cloud is thought to be more trustworthy than that of the public cloud. Utilizing a hybrid cloud has the benefits of flexibility, scalability, security, and cost effectiveness. Networking difficulties and security compliance are drawbacks of hybrid clouds. 12
Multi Cloud
This model includes a system with many clouds, which can be private or public and are not necessarily connected. This is frequently known as a community cloud in the literature. The benefits of using a multi-cloud system include resource pooling and better security than a public cloud. The disadvantages include lower security than a private cloud and the need for centrally controlled management policies. 12 As was already mentioned, each cloud architecture has advantages and disadvantages.
The model used will depend on how much storage, availability, efficiency, and security are needed by the user and the organization.
Cloud Services
Software as a Service (SaaS)
The SaaS model gives users access to databases and software; applications allow users to get information. 14 Customers do not need to install software on their local PCs because this model renders applications in the cloud across a network. 15 The cloud provider installs, hosts, and operates software in the cloud, and the user gains access via a cloud client. As a result, a single service instance can serve many users. The hosted application is run by the CSP, who also oversees and guarantees the system's uptime. 16
Platform as a Service (PaaS)
Under PaaS, databases and application platforms are made available to users as a service. It combines operating systems and application servers, such as Google App Engine, Microsoft Azure, and the LAMP platform (Linux, Apache, MySQL, and PHP). The PaaS approach improves application efficiency and places a strong emphasis on data security. The cloud service provider offers a platform that lets users create, run, and administer applications without having to worry about the difficulties of setting up and maintaining the infrastructure. By delegating control of the network, storage, and processing infrastructure to the application-hosting environment, users give up control of that infrastructure. 17 Users of this paradigm consequently feel less in control and have access to fewer operational features.
Infrastructure as a Service (IaaS)
IaaS offers computational power and storage as standardized network services: the customer is given online services to access, process, store, transmit, and execute their applications and data through the cloud. Virtual machines (VMs) are used to grant computing resources, and block storage and object storage are used to grant storage resources. 18 The consumer exercises control over the deployed OS, storage, and applications without needing to control or manage the underlying cloud infrastructure; occasionally, there is limited control over a small range of networking components.
Development as a Service (DaaS)
Under the DaaS concept, an internet-based, community-shared development tool is shared by several users. This is comparable to using a development tool installed locally on a computer in the conventional paradigm. In the world of software development, this is a recent trend.
Forensics as a Service (FaaS)
In terms of large (petabyte-scale) storage for collecting useful forensic data and resources for high computational power, cloud forensics significantly outperforms traditional digital forensics. 9,10 The FaaS architecture is specifically made to assist forensic investigators in centrally evaluating a significant volume of data that is physically unreachable or located at an unknown physical location. Investigators continuously collect data and transfer it to a centralized system; as a result, they are able to examine a small set of traces from enormous stacks of data.
Information Sharing
Unauthorized users may receive information as a result of some nefarious activity. One method of leaking such information is through VM configuration theft, 11,12 which searches for open ports to find services and the vulnerabilities connected to them. 13 In the cloud, both internal and external disclosure is possible.
• An internal disclosure occurs when a manager or employee accidentally makes private information public. Insufficient awareness of the sensitivity of information or a lack of care in shredding could lead to such admissions. Internal attacks may put some users in danger and give attackers total control over them. 14
• An external disclosure is one that seeks to accumulate system-specific facts about the provider, for instance, temporary files, backup files, version numbers, patch levels, and software distribution details. To stop such attacks where there is a possibility of information leakage, third-party authentication and the use of encryption techniques are common 16 (a minimal encryption sketch follows below).
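As a concrete example of the encryption mitigation named in the second point, the sketch below encrypts a record before it leaves for the cloud, using the symmetric Fernet scheme from the Python `cryptography` package. It is a minimal illustration under stated assumptions, not a full key-management design; the record contents are invented.

```python
from cryptography.fernet import Fernet

# Generate a symmetric key; in practice it would live in a key-management
# service, never alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"device-id=42;reading=confidential"  # hypothetical sensitive record
token = cipher.encrypt(record)                 # ciphertext safe to store with a cloud provider
assert cipher.decrypt(token) == record         # only the key holder can recover it
```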
Related Work
The use of cloud computing has grown significantly in recent years. Several studies have therefore addressed security threats, vulnerabilities, issues, and countermeasures. This section covers related work on cloud computing security issues.
The authors of 15 explored cloud computing topologies, security concerns, problems, and solutions. While discussing the supporting technologies, the study additionally addressed contemporary deployment models, cloud services, and cloud architectural frameworks.
While the authors of 11 emphasized the significance of information security in cloud computing and described the drawbacks of information leaks or breaches, the findings of that study were used to identify open research directions in the cloud security area. Nevertheless, 14 omitted to address the issue of data leaking and how it is resolved, as well as how important data are compromised and leaked via cloud computing. 16 examined cloud computing architectures, service models, deployment models, cloud components, and security challenges; however, they did not examine the literature's suggestions for resolving the issues. The authors identified security risks related to data transfer in the cloud. They discussed the merits of using public key infrastructure (PKI), the lightweight directory access protocol (LDAP), and the role of a trusted third party (TTP) as security measures to ensure the availability, authenticity, confidentiality, and integrity of data during connections. The authors of 17 conducted a qualitative examination of each service model's vulnerabilities and associated threats. They also suggested defenses to boost cloud computing security. The authors' main emphasis in 8 is on the vulnerabilities and the related dangers raised by the identified flaws; they avoided addressing existing issues brought up by the identified vulnerabilities and dangers, as well as potential future research directions.
The authors of 18 noted a gap in the literature regarding the mapping of security challenges to their corresponding solutions and the requirement for a standard framework for generalizing the idea while carrying out an intensive evaluation of particular needs. The authors also discussed unresolved issues and potential future research areas. Resource scheduling and cloud security were the subjects of a thorough literature study conducted by the authors of 19 in order to identify the pertinent work completed to date. The writers identified several risks and potential literature-based solutions. [9][10][11][12] DL claims success in a variety of cloud computing domains, including speech, image, and biomedical data processing. [13][14][15][16] DL architectures are configured as multilayer neural networks, and they are capable of transforming data into more abstract, higher-level representations. Suppose the data are already high-dimensional: the input can then be reconstructed from a low-dimensional code using neural networks (NNs) trained with a shallow central layer. 7 It was suggested that better categorization or data visualization may be accomplished by improving the inherent representation of the data from these features. Functions can be broken down into simpler functions to better understand structures in the data. The various layers of an artificial neural network were found by 18 to possess an unusual capacity for learning; these authors also suggested layer-by-layer "pretraining" and weight fine-tuning in nonlinear autoencoders as solutions to the training problem.
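As a minimal sketch of the autoencoder idea just described, the following trains a neural network with a narrow central layer to reproduce its input, so that a low-dimensional code captures high-dimensional data; the data, layer sizes, and hyperparameters are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 20))  # 20-D data on a hidden 2-D subspace

# The 2-unit bottleneck between wider layers is the "shallow central layer"
autoencoder = MLPRegressor(hidden_layer_sizes=(10, 2, 10),
                           max_iter=3000, random_state=0)
autoencoder.fit(X, X)            # train the network to reproduce its own input
print(autoencoder.score(X, X))   # reconstruction R^2; close to 1 if the 2-D code suffices
```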
In 19, the authors explored data security issues from the viewpoint of Nepal, a developing nation, in 2019.
The study outlined the difficulties that developing nations confront, which include confidentiality, charging models, breaches, segregation, access, integrity, security, storage, data centre operation, billing models, costing models, and proximity.
The key security problems identified by this research were storage, virtualization, and networks. Based on the security hazards posed by public clouds and the suggested protection techniques, the authors in 10 examined public cloud security protection approaches.
It is necessary to draw attention to and deal with new security concerns in the cloud computing industry. The information from related surveys and studies about security concerns in cloud computing that were previously published is summarized in Table 1.
Configuring the Cloud
The cloud is set up to offer services to customers while utilizing a secure connection and delivery method. NIST states that the configuration process can be broken down into five distinct tasks that are carried out by individuals in various roles. Table 2 displays the cloud's configuration and lists the organizations and their respective roles. 11,12 The cloud corporation uses all of its resources to meet customer demands. The five roles in Table 3 are filled by individuals who take part in duties such as cloud computing transactions, which is why the cloud also concentrates on risk assessment of cloud consumers and cloud providers.
Cloud Consumer
A group of people who use cloud providers' services is referred to as cloud consumers. The cloud consumer may be presented with a variety of services, from which they choose the best one and enter into a contract. To complete the agreement, the cloud consumer enters into a service level agreement (SLA) with the cloud provider and then examines the service's technical performance.
Cloud Provider
A dedicated organization that provides services to cloud customers and closes deals on behalf of cloud companies is known as a "cloud provider." (The cloud broker manages the use, performance, and delivery of services, 5 and the cloud carrier provides the connection and transport of cloud services; both roles are described below.)
In SaaS, cloud service providers offer services for the deployment, upkeep, and updates of software and applications. With PaaS, on the other hand, the cloud provider offers the infrastructure and environment components, including the database, the top-level software, and other vital elements. In IaaS, the physical computing resources, including storage, servers, networks, and web hosting infrastructure, are provided by the cloud provider.
Cloud Auditor
A team of specialists known as the cloud auditor can independently check cloud services for any anomalies. The auditing institution inspects the requirements by examining tangible pieces of evidence, and the cloud auditor also examines the privacy impacts, security measures, and efficiency of all cloud-related procedures.
The Cloud Broker
The management of performance, consumption, and the provision of cloud services is handled by the cloud broker. Instead of contacting cloud providers directly, cloud clients utilize cloud brokers to obtain cloud services.
Cloud Carrier
The cloud carrier establishes a connection between the cloud provider and the cloud client. Using this connection, the cloud services are provided to the user across the network. The cloud carrier is also in charge of maintaining a secure connection.
Cloud-Based IoT Attacks
Cloud service providers are often thought to be in charge of cloud security. However, more and more enterprises, data, and applications have been moved to the cloud in recent years. 13 Cyber attackers' priorities have altered as a result, and they now consider cloud services to be a more profitable target. 14 Figure 6 provides a representation of the cloud system's components, attacks, and vulnerabilities that can be examined to identify new points of weakness. Security threats in cloud computing are the most serious concern when investing in cloud services. This is because a third-party provider stores and processes the user's information without the user's knowledge. Every day, users are made aware of issues such as weak authentication, stolen credentials, account hacking, data breaches, and so on. IoT cloud computing is used as part of a collaboration to store IoT data. A cloud is a centralized server that houses computer resources accessible at all times. The Internet of Things has produced massive data packages, which can be easily sent using cloud computing. Unlike the IoT, where scenario detection depends on the combination of data, the old internet connects users by using physical linkages between web sites. The characteristics of IoT-based cloud attacks are shown in Table 3.
Hijacking an Account
This is a type of attack in which a hacker steals or commandeers the cloud account of a person or business. Sometimes the main objective is the person or business, and other times the attacker utilizes the stolen account information to carry out a subsequent attack. A later impersonation by the attacker could result in the leakage of confidential corporate information and sensitive individual data, as well as reputational harm. 15 Figure 7 illustrates this attack in visual form. Businesses and numerous organizations can safeguard their data within a cloud by adopting simple measures.
The following are some of the most effective techniques for preventing cloud account hijacking.
• Check with your service provider to confirm that personnel who have direct access to the server have undergone background checks. • Have a solid authentication plan in place for cloud app users. • Block access to cloud apps from specific IP addresses; many cloud applications allow users to select IP ranges from which to connect to the service via a VPN or their workplace network (see the sketch after this list).
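The IP-range restriction in the last item can be sketched briefly. This is a minimal illustration using Python's standard ipaddress module; the address ranges and function name are illustrative assumptions, not values from any particular cloud product.

```python
# A minimal sketch of IP-range allowlisting for a cloud app login check.
from ipaddress import ip_address, ip_network

ALLOWED_RANGES = [ip_network("203.0.113.0/24"),   # office VPN (example)
                  ip_network("198.51.100.0/24")]  # branch network (example)

def is_allowed(client_ip: str) -> bool:
    """Return True only if the client IP falls in an approved range."""
    addr = ip_address(client_ip)
    return any(addr in net for net in ALLOWED_RANGES)

print(is_allowed("203.0.113.42"))  # True: inside the office VPN range
print(is_allowed("192.0.2.7"))     # False: outside all approved ranges
```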
Denial of Service Attacks
The most frequent and straightforward attacks on IoT systems are denial of service attacks. This type of cloud attack can be quite harmful because it prevents the intended user from accessing services, applications, or data. 16 In order to deprive other requesters of service, the attacker bombards the targeted machine, application, or service with numerous requests until ordinary traffic becomes difficult to handle. It ultimately becomes the responsibility of the cloud service owner to raise elasticity levels to handle the rising traffic and use extra virtual resources to fulfill the requests and preserve the quality of service (QoS). Additionally, denial of service can operate as a catalyst and a smokescreen to hide malicious actions that get past the cloud firewall, which means that it can swiftly spread to destroy multiple devices rather than just one. 17 DoS attacks are used to prevent customers from accessing cloud networks, IoT, and other computer services. A denial of service (DoS) attack in the Internet of Things (IoT) aims to bring a system or network to a halt and make it unavailable to its intended users. Although DoS attacks are difficult to detect and avoid, we outline several approaches.
Prevent Spoofing
Use filters to stop spoofing of dial-up connections and make sure that the source IP address of the traffic matches the list of addresses for the site of origin. Limit broadcasting: attackers usually escalate their attack by sending requests to every device on the network. By restricting or turning off broadcast forwarding whenever possible, attacks can be stopped. Users can also disable the echo and chargen services when it is practical.
Streamline Incident Response
By streamlining incident response, the security team will be better able to react rapidly when DoS threats are discovered. Safeguard endpoints: verify that any known vulnerabilities have been fixed on these endpoints, and install EDR agents on all endpoints capable of running them. Activate firewalls: whenever possible, make sure the firewalls limit traffic entering and leaving the perimeter.
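One widely used flood-control measure alongside the steps above is rate limiting. Below is a minimal sketch of a token-bucket limiter in plain Python; the rate and burst capacity are illustrative assumptions, not values from the text.

```python
# A minimal sketch of a token-bucket rate limiter: tokens refill at a
# steady rate, and each request spends one token or is rejected.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request should be dropped or delayed

bucket = TokenBucket(rate=100, capacity=200)  # 100 req/s, burst of 200
print(bucket.allow())  # True until the bucket is drained by a flood
```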
Attacks by Phishers
Phishing attempts against cloud service providers entice clients by sending them a file or image and requiring them to log in with their account information in order to access it. In this type of attack, the perpetrators send phishing emails to obtain login information for personal or corporate accounts and access to sensitive information in order to set up the attack and avoid being discovered. 18,19 Figure 9 shows a graphic picture of this attack. A cloud computing system is susceptible to two different kinds of phishing attacks. The first involves taking control of accounts via customary social engineering approaches, and the second involves abusive behavior in which the attacker hosts phishing content on a cloud service. How can phishing be stopped, and how might a cloud help provide a solution? Users must follow these steps to guard against phishing attempts on IoT devices.
• Be careful when using any emails or websites. • Double-check a link before clicking on it. • Avoid emailing any private or business information. • Finally, alert website and email administrators to any questionable activity.
From this list, it is possible to see how a cloud solution might be useful. By limiting access to dangerous files and filtering incoming correspondence, a cloud-based email system, for example, can detect and assist in the defanging of malware. It can also provide the two-way communication required to warn the user and others about phishing attempts. The information gathered aids in the enhancement of the software's defenses.
Attacks Utilizing Malware Injection
Malicious programs and services are injected into the cloud during a malware injection attack. 11 With the cloud concept in mind, the attacker executes this attack using a variety of strategies. The attacker starts by creating a malicious service application module or virtual machine instance of their own, then attempts to add it to the cloud.
Attacks Using Port Scanning
In this type of attack, attackers may find open ports and utilize them to launch an attack on services using those ports. 13 The confidentiality and integrity of the cloud may be compromised as a result of this kind of attack. 12,14 Figure 11 shows a graphic representation of the port scanning attack. Having sufficient, current threat intelligence that is in sync with the evolving threat landscape is vital for stopping a port scan attack. To monitor ports and prevent malicious actors from entering their network, organizations also need dependable security software, port scanning tools, and security alerts. Tools like Netcat, Nmap, and IP scanners are all helpful. The following are some of the defense mechanisms:
A Strong Firewall
The use of a firewall effectively prevents unauthorized access to a company's private network. It manages ports, determines their visibility, and detects when a port's state needs to be changed; while a scan is running, the port can be turned off. TCP wrappers: using them, administrators can allow or deny access rights to servers based on domain names and IP addresses. Discover network vulnerabilities: businesses may use a port scanner to determine whether ports are being left open for no apparent reason. They must perform regular system audits to identify any vulnerabilities or gaps that a malicious threat could exploit.
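In the spirit of the port scanning tools named above, here is a minimal sketch of a TCP connect scan for auditing one's own hosts; the host and port range are illustrative assumptions, and such scans should only be run against systems one is authorized to test.

```python
# A minimal sketch of a TCP connect scan, the core idea behind audit
# tools like Nmap: try to connect to each port and record successes.
import socket

def scan(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    """Return the list of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports

# Audit the local machine's well-known ports (illustrative target).
print(scan("127.0.0.1", range(20, 1025)))
```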
Man-in-the-Middle Attacks

In a man-in-the-middle attack, an attacker secretly positions themselves between two communicating parties. In this case, listening in on the information can help the attacker alter the message, and the attacker may also change message relays. 15 If the communication channel is infiltrated, this type of attack can also occur during an ongoing conversation in order to capture the sensitive information being exchanged. 16 Figure 12 shows a graphic representation of this attack. Depending on the vulnerability point used, the existing IT security architecture, and users' awareness of potential IT security risks, detecting man-in-the-middle attacks might be hard; in this case, prevention is vastly superior to treatment. The most effective method of preventing a man-in-the-middle attack is to use a strong encryption technique between the client and the server. The connection cannot be established until the server sends and examines a digital certificate to authenticate the client's request.

While designing and marketing IoT devices, identification and authentication should be taken into account. Since a man-in-the-middle attack revolves around delivering false information and posing as a device to another device or user, users need a mechanism to verify that the devices and parties they connect to are who they claim to be.

To further protect themselves from man-in-the-middle attacks, users should take the following steps (a TLS verification sketch follows the list):

• Use virtual private networks (VPNs), • Use HTTPS to ensure the security of critical online transactions and logins, • Create unique Wi-Fi networks, • Encrypt emails using SSL/TLS, • Construct a mechanism for detecting intrusions (IDS).
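As a concrete illustration of the "strong encryption between client and server" advice above, below is a minimal sketch of a TLS client connection that enforces certificate verification, using Python's standard ssl module; the host name is an illustrative placeholder.

```python
# A minimal sketch of enforcing TLS with certificate verification on a
# client connection, one defense against man-in-the-middle attacks.
import socket
import ssl

context = ssl.create_default_context()   # verifies certificates by default
context.check_hostname = True            # reject mismatched host names
context.verify_mode = ssl.CERT_REQUIRED  # refuse unverified servers

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        # The handshake succeeds only if the server presented a
        # certificate chaining to a trusted CA for this host name.
        print(tls.version(), tls.getpeercert()["subject"])
```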
Botnet Attacks
A network of infected computers under the collective management of hackers carries out destructive actions in this kind of attack on the cloud. 17 Botnets spread by vigorously scanning computer systems or network devices for weaknesses while going through a list of IP addresses. User networks, companies, and customers are all seriously at risk from botnets. 18 Botnets can use a user's network to carry out harmful actions such as distributed denial of service (DDoS), spamming, data theft, and phishing attacks. They take advantage of today's sophisticated cloud computing platform. A bot master can also create a botnet using cloud services. A cloud-based botnet, also known as a botcloud, may get online quickly and continue to work uninterruptedly. Attackers use botnets to launch attacks that are challenging to stop or even notice, 19 which makes this one of the attacks that causes the victim the most harm. Figure 13 displays a graphic representation of the botnet attack. Nowadays, there are many botnets on the cloud, making prevention essential yet challenging. In order to take advantage of security flaws and vulnerabilities, botnets are continually evolving. As a result, each botnet may differ greatly from the others. Botnet operators are well aware that the more IP addresses and devices they use in their attacks, the more difficult it is for bot defense solutions to reliably screen out malicious requests for access to websites and APIs while also granting access to legitimate requests from clients or partners. Advanced detection skills are needed to recognize and stop botnet attacks. The following are a few of them.
• Update the software, • Carefully watch the network, • Keep a record of failed login attempts.
Security Service
In Table 5, many services utilized to safeguard our data are included together with an appropriate illustration of each sort of protection. The different security types are non-repudiation, authentication, confidentiality, integrity, and availability. 10
Security Concerns
Major cloud security risks and challenges are identified and discussed in this section. Something terrible that could happen to digital assets stored in the cloud is referred to as a security issue in the cloud. Data, software, infrastructure, client trust, and organizational reputation are some examples of these assets. 13 This study divides security concerns into the following four groups (1) data security concerns, (2) network and service security concerns, (3) application security concerns, and (4) security concerns involving people. This classification was created with the most recent trends in assaults on cloud computing platforms in mind. Table 6 provides a quick summary of each category, while Figure 16 provides a summary.
C1 Issues with Data Security
Numerous papers and articles [14][15][16] list data security concerns as one of the top ten cloud security threats for 2020. Private personal information is included in both user and company data. Companies have to acquire users' consent before retaining, collecting, or using their personal data in accordance with the General Data Protection Regulation (GDPR) rules. The 2019 Chinese release of Cyber Security Classified Protection 2.0 also addressed data backup, recovery indications, data secrecy, and data integrity. Utilizing user data that is stored at many locations during processing and shared with stakeholders as and when necessary, cloud computing is a special form of data sharing. Therefore, SaaS, PaaS, and IaaS consumers have serious concerns about data security in the cloud. Data security strives to prevent anyone from requesting unauthorized resources and to restrict access to data to those who have properly authenticated. It also allows legitimate users to view, transfer, or modify only the data that they are authorized to access. Data security concerns are among the top ten anticipated cloud security threats.
Storage
Most of the time, cloud computing models don't give users access to the data kept in the cloud service provider's data centers. 17 Despite having control over the virtual machines, users lose control over data storage. Attackers may modify the data after a user loses command and control while uploading it to the cloud. 18 Furthermore, cloud service providers can copy, manipulate, or modify user data without the user's knowledge. This causes a slew of storage-related security issues. Encryption is used to give users more control over their data, but it is insufficient on its own and has drawbacks.
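One way to regain some control over stored data, as discussed above, is to encrypt it on the client before upload so the provider only ever holds ciphertext. The sketch below assumes the third-party cryptography package is installed; the upload step is a stub, not a real provider API.

```python
# A minimal sketch of client-side encryption before uploading data to
# cloud storage, so the provider never sees the plaintext.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # keep this key outside the cloud
cipher = Fernet(key)

plaintext = b"sensitive customer record"
token = cipher.encrypt(plaintext)  # authenticated encryption (AES + HMAC)

# upload(token) would send only ciphertext to the provider (stub).
assert cipher.decrypt(token) == plaintext  # round-trips locally
```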
Location
In cloud computing, data are scattered across several geographical locations and in various formats. It is challenging to pinpoint the exact location of each data point. Additionally, there are laws and regulations for data handling specific to each geographical area that must be adhered to. Customers or users may occasionally be required to disclose the location of their data, and cloud service providers may be pressured to provide this information. When a user uses a public cloud, unused programs could be stored there. In order to maintain high accessibility, the cloud provider may also replicate the data in several locations across different countries. As users relinquish control over their data, there is therefore the possibility of exploitation, and worries are likely to arise in the absence of proper information.
Access
To prevent unauthorized access to services and stored data, it is essential to monitor user identification and activity while preserving user data in the cloud. Access controls guarantee that data confidentiality is maintained. Because data owners and data are spread across multiple platforms and locations, it is difficult to manage access and identity controls in cloud computing. Organizations cannot rely solely on their authentication and permission procedures in cloud-based systems.
The resources in the cloud are flexible and change size in response to user demands. When services begin or are restarted under various costing models, service providers' IP addresses are continuously altered. To provide security, a variety of key management systems and encryption methods are used. A quick identity management system should be included in a cloud to track people joining and leaving its resources. There are many problems with identity management and access control; for instance, weak credentials, insufficient logging and monitoring, accounts being locked for the duration of DDoS attacks, ineffective tenant segmentation, and poor identity management. 19
Integrity
Integrity entails keeping data accurate and free from errors. The goal of cloud-based systems is to guarantee that the data is stored in complete form and that it accurately and precisely flows into the database through the service. The user(s) or system(s) that created the data should be identifiable. Data should always have time stamps and be retrievable as needed.
Additionally, the data must be accurate and comprehensive. Data integrity issues are exacerbated in cloud computing environments because users have no control over where their data is saved, who can access it, and how. 11
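A common building block for detecting the kind of silent alteration described above is a cryptographic digest recorded before upload and re-checked after download. Below is a minimal sketch using Python's standard hashlib; the data and workflow are illustrative assumptions.

```python
# A minimal sketch of verifying data integrity with a SHA-256 digest:
# the digest is stored out of band and compared after retrieval.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"quarterly-report-contents"
recorded = sha256_of(original)   # store this digest outside the cloud

downloaded = original            # stand-in for the retrieved copy
assert sha256_of(downloaded) == recorded, "data was altered in the cloud"
```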
Privacy Breaches
Since unencrypted records are stored on a computer that is owned and operated by someone other than the real data owner, data privacy is an issue inherent to cloud computing. Any cloud data breach could make sensitive information accessible to customers of other firms that share the same storage. Due to multi-tenancy, consumers using various apps on virtual machines may also share the same database, [11][12][13][14] and an incident of compromise may affect all users in addition to the one for which it was intended. When examining data privacy incidents, "when, how, and to what extent" are three crucial factors that must be taken into account: 12 when, how, and how much of the released data were made public.
Providers must implement laws, policies, and procedures to secure personally identifiable information in order to maintain privacy. 11 Any cloud user who is not permitted to access sensitive data must be prevented from doing so, and the cloud service provider must report any violation as soon as possible. Various privacy concerns arise depending on cloud architectures and settings.
Network and Service Security Issues
This class includes network- and service-related security issues such as account or session hijacking, virtualization, and issues with multi-tenancy and availability.
Account or Session Hijacking
Cloud users can access their data and services via cloud-based technologies. Sessions, as well as user credentials, can be hijacked. Passwords are used by attackers to gain access to cloud service resources, and these credentials and account information are occasionally changed.
An unauthorised person with a password can access the consumer's data, which they can then sell, change, or steal for their own malicious purposes.
Private information may be fabricated or leaked as a result, harming the reputation and integrity of the firm and costing customers or businesses money. If consumers' confidential data is exposed during account hijacking situations, legal repercussions for businesses in sectors like healthcare are also conceivable. 13 These problems can be prevented to some extent by safeguarding credentials, utilizing two-factor authentication, and closely watching operations.
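Two-factor authentication, mentioned above, most commonly relies on time-based one-time passwords. Below is a minimal sketch of the TOTP computation (RFC 6238) using only the Python standard library; the base32 secret is an illustrative assumption.

```python
# A minimal sketch of time-based one-time passwords (TOTP, RFC 6238),
# the mechanism behind most authenticator apps: HMAC over a time counter.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                     # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0]
            & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # matches standard authenticator apps
```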
Multi-tenancy
Multiple customers of a cloud supplier can share the same computational assets, including software, hardware, services, network resources, and data, through multi-tenancy in cloud computing. Users of the cloud do share assets, but their data is kept apart. Multiple users can share the same infrastructure, including IaaS, PaaS, SaaS, containers, or serverless computing, in a multitenant architecture while still keeping their data private and secure. Customer data, for instance, can be stored on the same physical site. All security threats are facilitated by the idea of coexistence and resource sharing between different residents who are strangers to one another. 14 However, collocation or co-tenancy attacks that provide an attacker access to nearby VMs or apps are one way that multi-tenancy might be exploited. Additionally, multitenant systems present a vulnerability for information leakage because multiple data volumes are set aside for different purposes.
Virtualization
Virtualization technology is used in cloud computing to efficiently deploy resources. Customers of the cloud can purchase resources using a pay-per-use pricing scheme. They select the resources they need, such as CPUs, RAM, bandwidth, or operating systems, and only pay for the goods and services they actually use.
The system can suffer from multiple security flaws and numerous new security threats can emerge thanks to virtualization technologies. Since virtualization increases the density of connections and the number of entry points, it leaves environments vulnerable to all forms of attacks for various infrastructures. 15
Availability
Cloud system availability is essential. The delivery of the service should be on demand, according to cloud service providers. Because of the critical services that most businesses provide, any service interruption can result in loss and, as a result, a loss of customer trust. Attacks like denial of service attacks can result in non-availability, wherein all of the resources are used by the attacker and made unavailable to others, causing a denial of service and slow access to them. Additionally, users of the cloud service who were compromised by a botnet have an impact on the accessibility of other providers. Cloud outages, improper use of cloud resources, hardware issues, and insufficient bandwidth allocation are possible causes of unavailability. 16,18

Backup

Monitoring data backups is important because it's important to keep track of them and adhere to their security directives to make recovery easier in the event of an unintentional or intentional disaster.
To facilitate speedy recovery in the event of calamities, it must be ensured that all the data is regularly backed up. 16 To assure data availability and make it compliant with security standards to prevent malicious actions like unauthorized access and modification, regular backups of stored data must be made. 16,18

Issues with Application Security

Attackers may decide to target cloud apps.
The following list includes some of the problems that cloud applications encounter:
Malware Injection
Since this attack has become a serious security problem in cloud systems, cloud configurations for multi-user support must be made with caution. Malware infections and data leaks caused by improper cloud setups can harm both the organization's and the cloud service provider's entire cloud computing ecosystem. Malware injections are carried out by executing embedded code in cloud services that may be offered as SaaS on cloud servers. When an injection like this remains concealed for a long period, it creates a serious problem in the cloud environment. 17 Then, due to its simplicity of execution, this malware multiplies and spreads in cloud environments. This is another critical security issue that requires attention. Some common malware attacks include the hyper-call attack, distributed denial of service (DDoS), hyper-jacking, VM escape, and probe attacks. 18 Other security issues that need to be addressed include malware that lives on virtual machines (VMs), cloud malware synchronizing, and metamorphic engines.
User Interfaces
Users can customize their cloud experience by running applications in the cloud, but doing so poses a serious security risk to the cloud architecture as a whole.
Even many container-based platforms lack out-of-the-box security management. User interfaces, also known as application programming interfaces (APIs), enable developers to create and integrate application forms with the cloud. This interface's purpose is to allow users access to cloud services, but because certain of its APIs grant users access to potentially vulnerable cloud customers' systems, it may also be used improperly. Software services must have the most recent patches installed. The client might not be aware that they were hacked and what information was exposed.
Development Life Cycle
Any security measure used to thwart an attack, whether it's multiple firewalls, modern anti-virus software, logging, port and activity tracking, encryption, or some other security level, could be rendered ineffective if the software is insecure in the first place. In comparison to traditional application development, cloud software development is more complex. The technique of creating software code for the cloud introduces protection gaps, and frequent updates may also weaken security even as they accelerate development. Cloud application requirements, design, development, and testing require us to depart from the conventional methods employed in the SDLC 19 and adopt a preventative strategy for vulnerabilities, malicious attacks, and targeted cloud platforms; PaaS apps in particular require additional attention. Security problems of the development life cycle include the use of the wrong software development life cycle (SDLC), relying too heavily on programmers, the use of risky reverse engineering strategies, and finding issues after a product has been launched or deployed.
Issues with Security Regarding People
People are typically seen as the weakest link in security. However, since both internal and external parties are involved in cloud-based systems, there are more security risks involving individuals. Trust, human resources, compliance and regulatory requirements, as well as other problems brought on by nefarious insiders, are some of the important difficulties.
Customer Trust
In cloud settings, consumer programs, data, and infrastructures are controlled by a second or third party and not stored in a single area. Compared to conventional systems, this causes clients to have more trust concerns. The cloud provider is thought to be in charge of configuring the underlying SaaS, PaaS, or IaaS infrastructure and managing its security. When a user expresses concern about the security of sensitive information with a third party, a lack of confidence is felt and questions are raised about everything from the most minor security event to the most common security representation. In a study conducted by 12 in 2015, nearly 74 percent of participants stated they lacked confidence in the cloud's ability to protect their data. Problems that need to be solved in this trust area include contract management for cloud-to-cloud interactions, cloud system openness, fate sharing, data locality, audit methodologies, and perimeter security.
Legality and Compliance
Compliance, if poorly managed, can pose a serious security risk. Compliance with legal guidelines and rules is critical when using the cloud, storing and transferring records to and from the cloud, and dealing with cloud architecture. Because security and privacy rules and regulations vary from one place to another, compliance in cloud computing environments is a challenging and fairly difficult topic. 11 To manage data on the cloud, a cloud provider and consumer need to be aware of relevant policies and guidelines. Compliance includes confirming suitable safeguards for who has access to cloud assets, how much access they have, and how that access is maintained. This is accomplished through auditing. Audits are exceedingly difficult because of how young the public cloud infrastructure is. Providers of public clouds do not prioritize ensuring that compliance standards are satisfied.
Other issues in this category include governance, resource mismanagement, and legal issues.
Human Resource
The safety of cloud-hosted human resource systems, as well as safeguarding employee information and essential credentials from being compromised, is of utmost importance. Cloud HR solutions are frequently offered by cloud service providers as SaaS. Companies make this choice and move their payroll management, recruitment, and task management to the cloud due to its broad scope and ease of hiring. When these services come under attack, the organizations' finances and reputation suffer greatly.
Malicious Insider
Cloud-based systems are more vulnerable to social engineering and phishing attacks than traditional systems. Because a system is remotely accessible from the cloud, a malicious person can easily get in once they have access to login credentials or other sensitive information. 19 An internal attack on the group or team might be disastrous. Access by authorized users to damage the cloud environment is extremely harmful, much like unauthorized access. A malevolent insider could be a stockholder, a former or current employee, etc. An otherwise innocent insider may be forced to assist in the initiation of an attack by malicious outsiders who have influence or control over them. 20 They can collect financial information, client accounts, and other private information. Insider attacks are challenging to identify and stop since they would be mistaken for ordinary access, which would not raise an alarm. Applications for logging data can only be utilized to identify an attacker after an attack has already caused damage. Most of enterprises' security problems are due to a lack of cloud standards, poor management of internal access points, and insufficient monitoring.
The cloud computing security reference architecture is based on the NIST cloud computing reference architecture. Figure 17 displays the revised NIST Cloud Computing Security Reference Architecture from 11.
Limitations and Challenges in Cloud Computing

Organizations may now take advantage of cutting-edge cloud infrastructures with better productivity, lower costs, and improved efficiency thanks to cloud computing. There is a need to examine how conventional cloud infrastructures are maintained and to resolve the associated security challenges in the present day, owing to advances in 5G, reliable internet, smart mobile devices, IoT infrastructures, and smart AI-based data analytics systems. 13,14 Provisioning IT resources with cloud-based platforms necessitates little understanding of the underlying architecture. As a result, a company needs little time and expertise to configure the cloud. However, because of the heterogeneous nature of the cloud and the user's constrained knowledge of a specific cloud, they may end up with an infrastructure that is at risk of numerous cyber security problems; this could result in data breaches, denial of service, session hijacking, and other issues.
Confidentiality, Integrity and Availability (CIA)
The maintenance of availability, integrity, and confidentiality are among the major difficulties with cloud computing. The data collected by IoT devices must be shielded against unauthorized entry, which might otherwise lead to the alteration, addition, copying, or removal of data. It is also crucial to ensure confidentiality before uploading the data to cloud servers, especially when data transmission must occur over insecure media. 15,16

Application Security and its Aspects

A major obstacle and key area of vulnerability in information security is software application security. The various frameworks and application platforms may each have a different set of vulnerabilities. 17,18 Vulnerabilities in the application security element of cloud computing represent a substantial area of concern. In this regard, it is worth noting that the creation of programs written in a variety of languages by numerous programmers involves millions of lines of code, varying the list of vulnerabilities associated with them. Developers may only be in charge of cloud applications in cloud computing; however, the programming and security features cover every part of the application network. 19
COVID-19
In recent years, COVID-19 has emerged as a contagious illness that is primarily spread by airborne droplets. These droplets are produced when infected people cough, breathe, or sneeze, which then leads to the spread of the disease. Employees have encountered other difficult problems in addition to the external difficulties faced by cloud consumers. As a result of the dire circumstances, quick decisions have been made, like allowing staff to work from home. Unfortunately, the widespread adoption of the remote working style has necessitated an undue reliance on cloud resources. The policy of remote working is being phased out.
However, if a situation similar to this one occurs soon, it will be difficult for the cloud computing sector.
Limited Computation Resources
In recent years, agencies typically have not known where, how, or how much data and workload are stored on cloud-based systems. Relying on cloud service providers for these problems has become crucial. In order to avoid service performance degradation (in the case of an increase in demand) or service over-sizing (in the event of a drop in demand), variable workloads necessitate adjusting service capacity to demand. 13 Physically linked and situated systems, IoT data, and networks made network monitoring and logging much easier, and forensic investigation produced more information. 11,12 However, because mirroring uses more bandwidth, cloud service providers charge for it, raising the cost. Meeting all of the demands of a cloud service provider is difficult. When a resource depletion attack happens, the energy used by the traffic-consuming nodes is generated by exploiting a compromised node. These nodes lose energy while attempting to take down the network. The attack is therefore contained at the routing protocol layer.
These forms of attack, in which capabilities such as memory and network bandwidth are purposefully depleted, can affect computing resources, including those in the cloud. 13 Because the cloud scales to handle the workload, it is susceptible to attacks in which the resources are depleted as soon as the attack begins. Such attacks include exploitation of application communication flaws and volume-based flooding protocol exploitation.
Classification and Security Issues
Since its debut, cloud computing has experienced some security problems. The researcher must still pay attention to some new security risks related to virtualization, multi-tenancy, and various cyber-attacks as a result of evolving technologies and cloud architectures. In a cloud computing context, information assets can be found in a variety of places and formats. As a result, it is essential to categorize information assets and manage security concerns in accordance with the corresponding level of classification. In this way, security might be maintained for less money and work. When numerous users and organizations share information, it might be difficult to categorize data since one company may value one piece of information more than another.
Modern cloud infrastructures have several facets, which present challenges for security organizations in terms of data duplication, timely threat detection, limited control over data access, and the requirement for regulatory compliance. Additionally, protecting the cloud infrastructure and the data within it from known and unknown cyber attacks across all cloud components is necessary to achieve comprehensive cloud security, which is a difficult challenge.
It may be difficult for cloud service providers to ensure that safeguards against data loss or tampering are in place. A data breach or data hacking crisis is manageable only when records and applications are stored securely, interfaces are secured, data is retrieved only by authorized users, and data is available when required. Controls must be in place for cloud service providers to address these problems. In addition, it is critical that eavesdropping malware such as botnets is promptly detected. These threats are harder to discover in a cloud than on conventional devices and are capable of doing widespread damage. Data breaches resulting from improper intrusion detection systems with traffic monitoring need to be addressed. 14,15 The handling of insider threats in cloud computing is another difficulty and an unsolved research issue. Because of the risks and ambiguity in current cloud designs and models, cloud service providers must come up with more creative solutions.
Contracts between clients and suppliers should specifically address these security concerns.
AI and Deep Learning Limitations
Cloud computing services are accessible to anyone with the right credentials online and are not restricted to a single location. Due to the ease of access to online company data via the cloud, it is a popular target for attackers looking to study the systems, identify flaws, and take advantage of those weaknesses. It is vital to identify cyber attacks and security vulnerabilities in the cloud before they create any noticeable damage, through the convergence of cyber security, AI, and the data and resources provided by the cloud. AI and DL give computers the ability to learn from their previous performance of tasks and offer a greater level of intelligence to recognize and detect cyber-attacks.
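As a small illustration of AI-assisted attack detection, the sketch below fits an unsupervised anomaly detector to normal traffic features and flags flood-like outliers. It assumes scikit-learn is available, and the feature values are synthetic illustrations, not real traffic or any method from the surveyed works.

```python
# A minimal sketch of anomaly detection on network-flow features:
# train on normal traffic, then flag statistically unusual flows.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: bytes/s and packets/s (illustrative synthetic features).
normal = rng.normal(loc=[500, 60], scale=[50, 5], size=(500, 2))
attack = rng.normal(loc=[5000, 900], scale=[300, 50], size=(5, 2))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(attack))  # -1 marks flows flagged as anomalous
```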
Unfortunately, a lot of businesses are still unaware of the risks presented to their cloud and the necessity of investing in defense against new cyber attacks.
Ineffective Laws
Companies and cloud service providers rely on rules and requirements that are sometimes out-of-date and inapplicable. In order to account for the expanding changes and uncertainties brought on by cloud computing and the broad usage of the internet in general, new rules must be drafted rather than depending on those from the past. All parties involved in cloud-based systems must be aware of the inherent dangers associated with cloud computing and the measures taken by users to reduce these risks. Teams building cloud applications must receive comprehensive and need-based security training, something that software development enterprises frequently overlook.
More security vulnerabilities, such as insecure APIs, improperly configured cloud storage, and subpar access control, make it difficult for researchers to come up with workable and affordable solutions.
Organizations must adhere to cloud security requirements to avoid damage to their reputation and financial losses.
Issues with Security Policy
The guidelines known as security policies are the preventative steps implemented to prevent attacks. It is anticipated that in the cloud, the working environment will be protected by security rules or policies without degrading its dependability or performance. 16,17 These security policies also entail a number of specific service-level agreements (SLAs), antecedent trust, and customer management difficulties, and a few regulatory bodies govern them.
Future Direction
This research effort addresses additional security and privacy concerns related to cloud systems in the IoT. Future cloud computing system research may focus on the following areas. Security concerns: the most recent cloud security models may be studied and their analysis presented. Researchers may also examine contemporary security issues and challenging situations in cloud computing, such as authenticity, encryption, multitenancy, virtual machine safety, and how to mitigate these issues. Researchers should be aware of resource sharing in cloud computing infrastructure.
Data Processing: As a result of technological breakthroughs like smart cities, the Internet of Things, and 5G internet, cloud systems will play a larger role in data processing. 18 To achieve comprehensive cloud system security, the cloud infrastructure and data must be safeguarded against various threats.
Secure and Reliable Cloud Environment: In order to create a secure and reliable cloud environment, a number of problems still need to be resolved. Network, software, communication, web services, and data privacy vulnerabilities are among these security concerns. Cloud as a Service: The majority of manufacturing businesses in emerging markets now use cloud services, and manufacturing businesses will prioritize cloud services in the future.
Blockchain for Secure Cloud Data: Emerging cloud security difficulties include shared pool resources, virtualization, and multi-tenancy, and academics have advanced a number of tactics to safeguard cloud logs. Blockchain technology with decentralized cloud storage is useful in enhancing cloud data storage methods and data security, and this method safeguards the stored data from change and deletion.
Blockchain-based Cloud Log Security: Securing cloud logs using blockchain is a brand-new area of study. The suggested architecture uses blockchain technology to secure cloud logs, making cloud systems impenetrable and boosting users' confidence in a cloud environment.
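The core idea behind blockchain-secured logs, that each record commits to the hash of its predecessor so any later tampering is detectable, can be sketched briefly. This is a minimal plain-Python illustration, not the architecture the text refers to; the log messages are invented examples.

```python
# A minimal sketch of hash-chaining log entries: editing any stored
# record invalidates every hash after it, exposing the tampering.
import hashlib
import json

def append_entry(chain: list[dict], message: str) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"msg": message, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain: list[dict]) -> bool:
    prev = "0" * 64
    for entry in chain:
        body = {"msg": entry["msg"], "prev": entry["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, "user alice logged in")
append_entry(log, "vm-42 started")
print(verify(log))          # True: chain is intact
log[0]["msg"] = "tampered"  # any edit breaks verification
print(verify(log))          # False
```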
Authentication Method
Another potential future development in the cloud environment is the blockchain-based authentication mechanism for cloud databases. It will be tough for an insider to alter user login credentials thanks to blockchain technology. Insiders are unable to access user login credentials when distributed ledger-based authentication techniques are used.
Federated Learning for the Cloud
Federated learning is a novel machine learning technique that trains various algorithms on numerous decentralized servers using only local data. Due to bandwidth restrictions, the primary issue in a cloud system is the cost of communication between clients and the cloud server. For attaining robust privacy in cloud computing, federated learning techniques with a high level of organization can be applied.
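To make the idea concrete, below is a minimal sketch of federated averaging (FedAvg), in which each client updates a model on its local data only and the server averages the resulting weights. The linear model, synthetic data, and learning rate are illustrative assumptions, not a prescribed implementation.

```python
# A minimal sketch of federated averaging: clients train locally on
# private data, and only model weights are shared and averaged.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Four clients, each holding 50 private (X, y) samples.
clients = [(rng.normal(size=(50, 2)),) for _ in range(4)]
clients = [(X, X @ true_w + rng.normal(scale=0.1, size=50))
           for (X,) in clients]

def local_update(w, X, y, lr=0.1, steps=20):
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # squared-error gradient
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for _ in range(10):  # communication rounds
    local = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local, axis=0)  # server-side weight averaging
print(w_global)  # approaches [2, -1] without sharing any raw data
```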
Cloud Privacy Concerns
Strong privacy protection boosts users' confidence in cloud computing. Advanced optimization techniques can be utilized to improve the effectiveness of algorithm training. These methods will work well against severe collusion and trustworthy but inquisitive servers. Researchers and developers can address a variety of issues related to cloud computing via federated learning, including high communication costs, privacy concerns, statistical heterogeneity, and system heterogeneity.
Conclusion
In the past ten years, adopting cloud technology has changed the game for businesses, organizations, and hackers. Modern cloud architectures, fast internet, and new developments have all created security risks for cloud computing.
The adoption of cloud computing improved an organization's adaptability and scalability, permitting it to stay agile and competitive in the ever-evolving industrial environment. However, it simultaneously rendered their data less secure and more open to attack for a number of reasons.
The deployment models, cloud architectures, and typical assaults were covered in this article. After that, we divided cloud security concerns into four groups and talked about the problems in each. We also discussed a number of cloud computing concerns that require immediate attention. These difficulties also include the cloud computing-related restrictions that have emerged in the AI and DL fields.
Acknowledgement
The author would like to thank Lincoln University Malaysia for their guidance and support in completing this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
"Computer Science",
"Engineering"
] |
Severe Acute Liver Dysfunction Induces Delayed Hepatocyte Swelling and Cytoplasmic Vacuolization, and Delayed Cortical Neuronal Cell Death
Liver dysfunction is the main cause of hepatic encephalopathy. However, histopathological changes in the brain associated with hepatic encephalopathy remain unclear. Therefore, we investigated pathological changes in the liver and brain using an acute hepatic encephalopathy mouse model. After administering ammonium acetate, a transient increase in the blood ammonia level was observed, which returned to normal levels after 24 h. Consciousness and motor levels also returned to normal. It was revealed that hepatocyte swelling, and cytoplasmic vacuolization progressed over time in the liver tissue. Blood biochemistry also suggested hepatocyte dysfunction. In the brain, histopathological changes, such as perivascular astrocyte swelling, were observed 3 h after ammonium acetate administration. Abnormalities in neuronal organelles, especially mitochondria and rough endoplasmic reticulum, were also observed. Additionally, neuronal cell death was observed 24 h post-ammonia treatment when blood ammonia levels had returned to normal. Activation of reactive microglia and increased expression of inducible nitric oxide synthase (iNOS) were also observed seven days after a transient increase in blood ammonia. These results suggest that delayed neuronal atrophy could be iNOS-mediated cell death due to activation of reactive microglia. The findings also suggest that severe acute hepatic encephalopathy causes continued delayed brain cytotoxicity even after consciousness recovery.
Introduction
The liver is a metabolic organ involved in detoxification, nutritional metabolism, blood volume maintenance, and hormone regulation [1]. Hepatic encephalopathy (HE) is a leading complication of liver failure and is frequently associated with the progression of end-stage liver disease [2]. Furthermore, liver failure is closely associated with the development of intracranial hypertension, which clinically manifests as cerebral edema and has been suggested to play a crucial role in HE development. HE ranges from minimal dysfunction to coma, resulting from ammonia-related neurotoxicity, where metabolic dysfunction causes glutamine accumulation, astrocyte swelling, and nitric oxide-induced vasodilation [3]. Increased intracranial pressure, secondary to cerebral edema, is common in patients with liver failure and occurs in 80% of comatose patients [3][4][5]. In patients with acute HE, cerebral edema plays an important role in the associated neurological deterioration. Increased fluid infiltration from blood vessels into the brain increases brain volume and intracranial pressure [2]. Vasogenic edema is primarily caused by the disruption of tight endothelial junctions that make up the blood-brain barrier (BBB) [6]. Perturbation of cell metabolism impairs the sodium and potassium pumps' function in glial membranes surrounding the blood vessels, leading to the accumulation of osmotically active molecules and retention of sodium and water in cells, thus resulting in swollen glial cells and neuronal cytotoxic edema [6][7][8]. The molecular mechanisms underlying glial cell swelling have not been fully elucidated. However, they are thought to be associated with osmo- or stretch-sensitive intracellular signaling cascades involving [Ca 2+ ]-ion transients, aquaporins (AQPs, primarily Aquaporin4), and volume-regulated anion channels [8][9][10][11]. When glial cell swelling occurs, neuronal atrophy is triggered in the brain. In HE, the primary targets among the glial cells affected in the brain are the perivascular astrocytes (glial fibrillary acidic protein; GFAP-positive cells), but it is thought that microglia (ionized calcium-binding adapter molecule 1; Iba1-positive cells), which mediate immune responses, are also affected. Accompanying an immune response, a nitric oxide-producing enzyme that can be induced by cytokines (inducible nitric oxide synthase: iNOS) and inflammatory cytokines (interleukin [IL]-1β, IL-18, CD16, and tumor necrosis factor-α [TNF-α]) play a role in inflammation [12,13]. However, the presence or absence of glial cell degeneration and inflammatory response in the brain in HE has not yet been clarified in detail.
In HE research, few basic studies have investigated pathological changes in the liver, except for reports in the clinical pathology field. Even in such settings, there are some reports on acetaminophen-induced acute liver failure [14]. In that study, N-acetylcysteine had a hepatoprotective effect and prevented neurological complications, such as HE and cerebral edema. Similarly, there is a report on the effect of dihydromyricetin in acute liver failure by immunohistological analysis of liver tissue changes and brain astrocytes [15].
Several studies have reported the mechanism of action and related receptors for cerebral edema in patients with HE. However, there are no reports on a concurrent histopathological and ultrastructural analysis of liver injury and cerebral edema. Furthermore, the cause of late-onset cerebral dysfunction has not been reported, and no time-course analysis has been performed. Therefore, the relationship between brain pathology and pathological changes in the liver remains unclear. This study aimed to use light and electron microscopy for pathological and ultrastructural analysis of the brain and liver in HE. In addition, we performed biochemical data analysis and Western blot analysis to evaluate delayed brain dysfunction after acute HE.
Induction of Hepatic Encephalopathy
This study was designed to investigate whether acute hepatic dysfunction affects the brain. First, the sequential changes in the blood ammonia concentration in each animal following treatment with the ammonia-inducing compound ( Figure 1) were evaluated. The average blood ammonia value recorded was low for the control animals (84.2 ± 31.9 µg/dL) and high for the treated animals at 3 h post-injection (629.6 ± 92.5 µg/dL, p < 0.05) before almost returning to control levels at 24 h post-injection (169.3 ± 89.3 µg/dL).
Histopathological Evaluation of the Hepatic Structural Changes in Response to Hepatic Encephalopathy
Histopathological evaluation of the liver was performed to evaluate the relationship between pathological changes in the brain and liver ( Figure 2). Using the HE staining method, the livers of the control animals were found to be healthy, with normal hepatocytes and sinusoids (Figure 2A). These tissues were also stained with uniform eosin. In both the ammonia treatment groups, eosin staining was significantly reduced (eosinophilic changes: Figure 2B,C). In addition, many hepatocytes exhibited vacuolization and/or swelling, indicating cell death at 24 h after ammonia treatment (arrows in Figure 2C). For quantification analysis, we counted the number of hepatocytes exhibiting swelling and/or cytoplasmic vacuolization; the number of such hepatocytes in the control and 3 h post-ammonia treatment groups was very low. However, in the 24 h post-ammonia treatment group, the number of hepatocytes exhibiting swelling and/or cytoplasmic vacuolization was significantly increased ( Figure 2J).
Figure 1. Control, 3H, and 24H indicate the average blood ammonia concentration for the animals in the vehicle treatment control group and the treated animals at 3 and 24 h after ammonia treatment, respectively. Data are expressed as the mean ± standard error. *: p < 0.05 compared with the control value.
Periodic acid-Schiff (PAS) staining was normal in control livers ( Figure 2D). As with HE staining, a decrease in PAS staining was observed in both ammonia-treated groups; however, intrahepatic fibrosis did not occur ( Figure 2E,F). In the liver of the animals in the 24 h post-treatment group, uneven staining was observed in hepatocytes, indicating that the hepatocytes were damaged (arrows in Figure 2F). As hemosiderin deposition (iron deposition) occurs with continued liver injury, iron staining was performed (Berlin-blue iron staining). Normal liver tissue was negative for hemosiderin ( Figure 2G). A low number of iron-positive hepatocytes was detected in the livers of ammonia-treated animals (3 h post-treatment) (red arrows in Figure 2H). In contrast, several iron-positive sites were seen in the animals 24 h post-ammonia treatment (red arrows in Figure 2I). For quantification analysis, we counted the iron-positive hepatocytes. The number of iron-positive hepatocytes in the control and 3 h post-ammonia treatment groups was very low. However, in the 24 h post-ammonia treatment group, the number of iron-positive hepatocytes was significantly increased ( Figure 2K).
Using scanning electron microscopy, we examined the pathological changes in the hepatic sinusoid. The sinusoidal endothelium of control (normal) mice showed a healthy structure ( Figure 3A), while the ultrastructure of the sinusoidal endothelium of the 3 h post-ammonia treatment mice showed moderate dilatation of the sinusoidal capillaries ( Figure 3B). The ultrastructure of the sinusoidal endothelium of the 24 h post-ammonia treatment mice showed significant dilatation of the sinusoidal capillaries ( Figure 3C) and swelling of sinusoidal fenestrations (yellow arrows in Figure 3C).
Biochemical Analyses
Various biochemical parameters of the different treatment groups were evaluated using commercial kits (Table 1). These evaluations revealed that the total protein (TP) content was significantly decreased while aspartate aminotransferase (AST) was significantly increased at 3 h post-ammonia treatment compared to the control group. At 24 h post-ammonia treatment, TP had returned to control levels while AST levels remained elevated. Alanine aminotransferase (ALT) levels were also significantly increased, while both albumin (ALB) and alkaline phosphatase (ALP) showed an increasing trend that did not reach significance. Control, 3H, and 24H indicate the average blood chemistry values from the vehicle treatment group and from the samples taken from animals 3 and 24 h following ammonia treatment, respectively. Data are expressed as mean ± standard deviation. TP-total protein; ALB-albumin; AST-aspartate aminotransferase; ALT-alanine aminotransferase; ALP-alkaline phosphatase; * p < 0.05, compared to the control.
Histopathological Evaluation of the Changes in the Cerebral Cortex in Response to Hepatic Encephalopathy
Histopathological evaluation of the changes in the cerebral cortices was conducted among animals with encephalopathy. First, we examined the water content in the cerebral cortex (Figure 4). Compared to the control mice (76.8 ± 1.4%), the treated animals showed significantly increased water content in the cerebral cortex (78.8 ± 2.1%; p < 0.05) 3 h after ammonia treatment. Similarly, the cerebral cortex water content in the 24 h post-treatment animals showed a significant increase compared to the control mice (79.3 ± 1.8%; p < 0.05).
Histopathological changes in the cerebral cortex were evaluated using hematoxylin-eosin (HE) staining (Figure 5), which revealed that the cortical tissues of the control animals were healthy, with normal neurons and blood vessels (Figure 5A). In contrast, eosin staining was reduced in the cortical tissues of both ammonia treatment groups (Figure 5B,C), and the perivascular spaces were swollen (arrows in Figure 5B,C). Twenty-four hours post-ammonia administration, the nerve fibers were swollen (red arrowheads in Figure 5C).
Nissl staining was performed to identify pathological changes in the neuronal cell bodies. The cortical neurons of control animals appeared normal (Figure 5D). Nissl staining intensity decreased slightly at 3 h post-administration and remarkably at 24 h post-administration (Figure 5E,F). For quantification, the intensity of Nissl-stained neurons was measured (Figure 5G). The intensity in the control and 3 h post-ammonia treatment groups was very high, whereas in the 24 h post-ammonia treatment group it was significantly decreased. Moreover, many atrophying neurons were identified in the cerebral cortex (black arrowheads in Figure 5F). For quantification, we counted the number of atrophying neurons, which was very low in the control and 3 h post-ammonia treatment groups. However, in the 24 h post-ammonia treatment group, the number of atrophying neurons was significantly increased (Figure 5H).
The structural changes observed under the optical microscope were examined in further detail using a transmission electron microscope (Figure 6). First, regarding perivascular pathological changes, perivascular edema was observed in both ammonia-treated groups compared with the control group (blue arrowheads in Figure 6B,C). However, no damage was observed in the basal membrane (black double arrowheads in Figure 6A-C) or the vascular endothelial cells. Second, regarding pathological changes in the neuronal cell bodies, many intracellular organelles, such as the rough endoplasmic reticulum (black arrowheads in Figure 6D) and mitochondria (red arrowheads in Figure 6D), were confirmed in normal neurons (Figure 6D). A decrease in the rough endoplasmic reticulum (black arrowheads in Figure 6E) and degeneration of mitochondria (red arrowheads in Figure 6E,F) were observed in both ammonia-treated groups (Figure 6E,F).
After ammonia administration, various degenerative findings were observed in the cerebral cortex of the 24 h group (Figure 7). Perineural glial cells (astrocytes) with reduced organelles and mitochondrial degeneration (red arrowheads in Figure 7A,B) were observed (arrow in Figure 7A). Swelling of nerve fibers was confirmed through optical microscopy, and similar swelling was also confirmed via electron microscopic analysis (Figure 7C,D); degeneration of mitochondria within the fibers was also observed (red arrowheads in Figure 7C,D). In addition, numerous dead neurons were observed (blue arrowheads in Figure 7E,F), which were attributed to ammonia's toxic effect. Several large vacuoles and edematous spaces were observed around the sites of neuronal cell death, suggesting that damaged astrocytes surrounding the neurons may cause neuronal death.
Brain Inflammatory Response after Hepatic Encephalopathy
Western blotting was used to examine whether the neuronal atrophy in the brain was due to inflammatory reactions. First, iNOS, which is related to neuronal atrophy, was examined for changes from the onset of HE (Figure 8A). The optical density of the bands increased over time and remained significantly elevated even after seven days (Figure 8D). Since perivascular astrocyte swelling was remarkable in HE, we next investigated the astrocytes (Figure 8B). Using an anti-GFAP antibody (astrocyte marker) [16], we found that the amount of GFAP showed a transient increase that peaked at 3-24 h after induction of HE, after which there was no significant difference from the control value (Figure 8B,E). As glial cells mediate inflammation of nerve cells, we then examined microglia using an anti-Iba1 antibody (microglial marker) (Figure 8C) [17]. The amount of Iba1 protein showed a pattern similar to that of iNOS expression: the optical density of the bands increased over time and remained significantly elevated even after seven days (Figure 8F).
Discussion
Ammonia is normally produced in the gastrointestinal tract via the breakdown of proteins and amino acids by host and bacterial enzymes. Ammonia then enters the portal circulation and is metabolized in the liver to urea and glutamine before being excreted. When the liver malfunctions, blood ammonia levels increase, leading to an influx of ammonia into astrocytes, increasing their volume and inducing cerebral edema [18]. The normal human blood ammonia concentration is 30-80 µg/dL (9-35 µmol/L), and the reference range for blood ammonia concentration decreases with age [19,20]. Arterial ammonia concentrations above 340 µg/dL (200 µmol/L) increase the risk of cerebral edema and coma [19,20]. Individuals with elevated blood ammonia levels often present as comatose patients or patients with cerebral edema [20]. Previous acute-phase studies using animals show a transient increase in blood ammonia levels after ammonia administration [21,22]. Urea metabolism and urinary excretion also increase with increased blood ammonia concentration [22]. This metabolism is triggered in parallel with blood ammonia levels, as well as by reactions during food intake [23]. In this study, the blood ammonia concentration increased transiently and the mice fell into a coma; however, the mice woke from the coma within a few hours with recovered motor function. Urinary metabolism is therefore also considered a factor in the relationship between the comatose state and the blood ammonia level.
Several methods have been devised to study HE. Based on previous reports using ammonium acetate [24][25][26][27][28], a simple and accurate animal model exhibiting elevated blood ammonia levels was devised. Our mouse model was used to investigate pathologically how elevated blood ammonia levels affect the brain. In this study, after two episodes of ammonia treatment, the mice fell into a coma for several hours; however, they woke within 6 h, returned to normal, and exhibited intact motor function, without a single death. This change in the condition of the mice correlated with transient increases in blood ammonia levels (Figure 1). These treatments induced a transient decrease in the total protein levels, which returned to normal after 24 h (Table 1). In addition, ALB levels did not change. Therefore, these animals may have maintained blood osmolarity. Furthermore, as we have previously reported [29], the osmotic pressure of the reagents and fixatives used in this study is within the normal range, indicating that the pathological changes are not due to side effects.
Histological analysis of the liver (Figure 2) and biochemical analysis of the blood (Table 1) were performed on the same individuals. HE staining, PAS staining, and the hemosiderin reaction were all normal in the normal liver. Conversely, ammonia administration significantly decreased eosin staining in the HE-stained images, suggesting that the cytoplasm of the hepatocytes was damaged. This was particularly marked in individuals at 24 h post-ammonia treatment, suggesting that hepatocyte vacuolization and swelling had progressed. Although the PAS-stained images showed decreased stainability, liver fibrosis was not observed. Furthermore, despite many positive structures in the hemosiderin reaction, no significant iron accumulation was observed, and the degenerative findings were not dramatic. This correlates with the significant increase in liver marker values in the blood biochemical data of the 24 h group after the blood ammonia concentration had decreased. Furthermore, the scanning electron microscopy results also indicate that the sinusoids are gradually damaged (Figure 3), suggesting that delayed hepatocyte deterioration is induced during recovery after acute HE.
In addition to the liver analysis, macroscopic analysis of the cerebral cortex (water content measurement) showed a significant increase in water content in the ammonia-treated groups, indicating the presence of cerebral edema (Figure 4). Similarly, HE staining (Figure 5) also revealed cerebral edema in these animals, including swelling of the tissue around the blood vessels in the cerebral cortex. This result is consistent with that of a previous study [20]. The HE-stained images of the cerebral cortex taken 24 h post-ammonia treatment showed persistent perivascular swelling, consistent with the results of the water content measurements. Furthermore, microscopic analysis using a transmission electron microscope (Figures 6 and 7) showed swelling around blood vessels and astrocytes in the ammonia groups. Previous studies on HE have reported swelling around the blood vessels that make up the BBB of the cerebral cortex, and aquaporin-4 (AQP4) is closely associated with astrocytic activity in these vessels [30][31][32][33][34][35][36]. This study detected similar swelling around blood vessels in the cerebral cortex following ammonium acetate administration, suggesting swelling mediated by AQP4. AQP4, found in the brain, is a bidirectional transmembrane water channel specifically localized in the terminal feet of astrocytes, and it may play an important role in the exacerbation and resolution of traumatic cytotoxic brain edema. Physiologically, AQP4 associates with the inwardly rectifying K+ channel Kir4.1, which colocalizes with AQP4 on the terminal feet of rodent astrocytes; during high neuronal activity, water is thus cleared via AQP4 and K+ via Kir4.1 from the extracellular compartment into neighboring astrocytes [37]. In pathophysiological situations, AQP4 alone also transports water across cell membranes in both directions along osmotic gradients [38]. Deletion of AQP4 has been reported to reduce cerebral edema [31]. Many studies have suggested that perivascular astrocyte swelling in HE is mediated by astrocytic AQP4. Various studies have addressed how ammonia affects astrocytes through different mechanisms, one of which suggests that AQP4 acts as an ammonia channel, implying that blood ammonia directly enters the astrocytes through AQP4 [39]. Glutamine is produced from ammonia taken up by astrocytes in the presence of glutamine synthetase. A large amount of glutamic acid is taken up by neurons, disturbing the neurotransmitter balance and thus inducing neuronal dysfunction. It has been suggested that mitochondrial dysfunction in neurons, caused by altered neurotransmission alone, may lead to neuronal atrophy [40,41].
The Nissl staining results confirmed a significant decrease in nerve cell staining after ammonia administration, especially in the 24 h post-treatment animals (Figure 5). Nissl staining strongly stains the rough endoplasmic reticulum in the nerve cell body, suggesting a decrease in the rough endoplasmic reticulum in the nerve cell bodies of the 24 h post-treatment animals. Therefore, microscopic analysis was performed using a transmission electron microscope (Figures 6 and 7). As a result, a gradual decrease in rough endoplasmic reticulum was confirmed at 3 h post-ammonia treatment, and a marked decrease was observed at 24 h. Furthermore, several abnormal mitochondria were observed, suggesting that the nerve cells were greatly affected. The ultrafine morphological observations confirmed various changes in the nerve cells of animals 24 h after ammonia administration (Figure 7). Abnormal mitochondria were observed not only in the nerve cell bodies but also in the neurites. The mitochondrial permeability transition is a Ca2+-dependent process characterized by the opening of permeability transition pores present in the inner mitochondrial membrane. Opening of the pore results in increased permeability to protons, ions, and other solutes of ≤1500 Da, leading to a collapse of the mitochondrial inner membrane potential and ultimately resulting in decreased oxidative phosphorylation and bioenergetic failure [42]. The induction of the mitochondrial permeability transition can also lead to secondary oxidative stress [43,44]. It has been reported that the administration of ammonia to cultured astrocytes induces the mitochondrial permeability transition [45]. The mitochondrial permeability transition was also induced in an HE rat model [46], and the abnormal mitochondria observed here by ultrastructural analysis are consistent with these biochemical data. Moreover, ammonia-induced mitochondrial dysfunction has been reported to affect brain energy metabolism [47][48][49]. The reduction and abnormal morphology of the mitochondria in neurons confirmed in this study suggest that the energy metabolism of the neurons themselves may be impaired. In addition, many cells underwent degeneration and cell death. Several vacuoles and edematous spaces were found around dead neurons, and most astrocytes around the blood vessels were swollen in the ammonia-treated groups. In contrast, astrocytes around live neurons showed a normal structure in animals 24 h after ammonia administration. It remains to be determined whether this difference reflects the distance from the blood vessels, with perivascular astrocytes exposed to higher ammonia concentrations than more distant astrocytes, or whether it is due to functional differences among astrocytes [16,[50][51][52][53]. Therefore, this issue needs to be addressed in future studies.
Brain inflammation is known to be involved in neuronal atrophy, and inflammatory cytokines (iNOS, IL-1β, IL-18, CD16, and TNF-α) play a role in inflammation [12,13]. We investigated the possibility that inflammatory cytokines caused the delayed neuronal atrophy observed in this study after the decrease in blood ammonia level (Figure 8). We analyzed the expression changes of iNOS, which is thought to be strongly involved in neuronal atrophy. A further increase in iNOS expression was confirmed 24 h after the decrease in blood ammonia level, and the increased expression was maintained even after seven days. This result suggests that iNOS is involved in delayed neuronal atrophy. We also investigated the activation of astrocytes to examine the relationship with iNOS. After induction of HE, GFAP expression transiently peaked at 3-24 h but then decreased sharply and returned to normal values. In addition, we examined the Iba1 protein (microglial marker), which peaked 24 h after the induction of HE, with the increased expression maintained even after seven days. These results suggest that HE first triggers swelling of astrocytes in the brain, followed by activation of reactive microglia, which consequently increases iNOS expression. These morphological changes and the increased expression of various inflammatory cytokines are thought to induce delayed cytopathic atrophy in neurons. On the other hand, it has been reported that GFAP is not elevated in the HE brain [54]. This difference is thought to be due to the blood ammonia concentration: severe hyperammonemia is believed to have a profound effect on glial cells in the brain.
Coma and cerebral edema occur in almost all patients when blood ammonia levels increase as liver disease progresses, including in HE [55][56][57][58]. HE is diagnosed comprehensively by distinguishing it from other diseases on the basis of abnormal liver function; the presence or absence of a history of liver disease; neuropsychiatric symptoms such as disturbance of consciousness; hyperammonemia; electroencephalogram abnormalities; imaging; and biochemical examination results. After the diagnosis of hepatic encephalopathy, removal of toxic substances (mainly ammonia) and correction of amino acid metabolism are performed from an early stage. Thus, although human patients rarely reach the high ammonia levels described in this study, an increasing number of patients with various liver dysfunctions, including HE, has been reported [59][60][61][62][63][64]. Patients often present with pathological conditions such as liver cirrhosis, and in vivo histopathological examination of the liver is thought to be necessary; however, histopathological examination using liver biopsy is rarely performed at the time of examination. Therefore, it is increasingly important to understand and assess the precise pathological effects of the acute phase of these conditions on the liver and brain. The findings of this study are expected to contribute to therapeutic methods for delayed cell death in the brain and to facilitate the introduction and development of early treatment methods. The results of the current study will serve as a basis for both basic research and clinical application.
Study Design
We used 78 apparently healthy male C57BL/6J mice (10 weeks old; Charles River, Yokohama, Japan). The mice were housed under temperature- and humidity-controlled conditions with a 12:12 h light-dark cycle and free access to food and water. Animal experiments were performed following the National Institutes of Health Guide for the Care and Use of Laboratory Animals, and the protocol was approved by the Laboratory Animal Ethics Committee of Meiji Pharmaceutical University (No. 2704, 1 April 2017-2022). All efforts were made to minimize animal suffering and to reduce the number of animals used in the study.
Inducing Acute Liver Failure
Acute liver failure was induced in the animal model as described in our previous study [29]. Using this method, it is possible to raise blood ammonia levels to coma-inducing levels without killing the animal. Fifty-seven 10-week-old C57BL/6J mice were treated with ammonium acetate (4.5 mmol/kg body weight; Sigma-Aldrich, St. Louis, MO, USA) via two intraperitoneal injections with a 15 min interval between injections. Twenty-one age-matched male mice were intraperitoneally injected with saline and used as controls.
Biochemical Analysis
Using an animal restrainer for laboratory mice (CL-4903, CLEA-Japan Co., Tokyo, Japan), blood samples were collected from the tail vein of the unanesthetized animals (10 mice per group) from all groups (3 h and 24 h after ammonium acetate or vehicle injection), and 100 µL of each sample was diluted in an equivalent volume of heparin-supplemented physiological saline. These samples were then centrifuged (1500× g, 10 min, room temperature), and the plasma obtained after centrifugation was quickly cryopreserved at −80 °C for evaluation. The plasma ammonia concentrations were determined using a Cica-liquid NH3 kit (Kanto Chemical Co., Inc., Tokyo, Japan). TP, ALB, AST, ALT, and ALP concentrations were also determined using the appropriate test kits (Wako Pure Chemical Industries, Ltd., Tokyo, Japan).
For optical microscopy, brain and liver tissues were collected from animals in the control and treatment groups (six mice per group). They were fixed with 4% paraformaldehyde in 0.1 M phosphate buffer (PB) (pH 7.4). The brains and medial lobe of the liver were sliced into 2 mm thick sections using a slicer. The sections were immersed in graded concentrations of ethanol, cleared with Lemosol A (Wako Pure Chemical Industries, Ltd., Tokyo, Japan), and embedded in paraffin.
For transmission electron microscopy, brain tissues were collected from animals in both control and treatment groups (4 mice per group). They were fixed with 4% paraformaldehyde and 1% glutaraldehyde in 0.1 M PB (pH 7.4). Brains were sliced into 1 mm thick sections using a slicer. Sections were immersed in osmium tetroxide (TAAB Laboratories, Ltd., Aldermaston, UK) for 2 h, dehydrated in ethanol, and embedded in Epon-812 resin (TAAB Laboratories, Ltd., Aldermaston, UK).
For scanning electron microscopy, the medial lobe of the liver tissue was collected from animals of both control and treated groups (4 mice per group). The livers from each group were fixed with 4% paraformaldehyde and 2% glutaraldehyde in 0.1 M PB (pH 7.4). After washing with PB and cutting, the liver blocks were immersed in osmium tetroxide solution, substituted with 50% DMSO, cracked in liquid nitrogen, dehydrated through graded concentrations of ethanol, and substituted with 2-Methyl-2-propanol. All the cracked blocks were freeze-dried and sputter-coated with gold.
Brain Water Measurement
Cerebral cortices from each group (6 mice per group) were quickly cut into 2 mm thick slices, flash-frozen in liquid nitrogen, and stored at −70 °C for 48 h. Water content was measured gravimetrically using a bromobenzene-kerosene density gradient (Fisher Scientific, Pittsburgh, PA, USA) pre-calibrated with potassium sulfate (K2SO4), as previously described [14,69]. Cortical sections were placed on the fluid column, and the equilibration point was measured after 2 min. Four to five measurements were performed per animal, and the values were arithmetically averaged. The specific gravity of the tissue was calculated, and the results were expressed as a percentage of water content.
Histological Analysis
All staining required sectioning of the paraffin-embedded brain and liver blocks, which were cut into 5 µm thick sections using a sliding microtome (REM-710; Yamato Kohki Industrial, Tokyo, Japan). All sections were mounted on glass slides, deparaffinized with Lemosol A, and immersed in graded concentrations of ethanol and distilled water.
For HE staining, sections of the brains and livers were stained with hematoxylin and eosin solutions (Muto Pure Chemicals Co., Ltd., Tokyo, Japan). After washing, the sections were dehydrated using graded concentrations of ethanol and Lemosol A, and a cover slip was placed.
For Nissl staining, the brain sections were stained with 0.1% cresyl violet solution (Abcam, Cambridge, UK). After washing, the sections were dehydrated using graded concentrations of ethanol and Lemosol A, and cover-slipped.
For PAS staining, the liver sections were stained using a PAS solution kit (Muto Pure Chemicals Co., Ltd., Tokyo, Japan). After washing, the sections were dehydrated using graded concentrations of ethanol and Lemosol A, and cover-slipped.
For iron staining (Berlin-blue iron staining), the liver sections were stained using an iron staining kit (Muto Pure Chemicals Co., Ltd., Tokyo, Japan). After washing, the sections were dehydrated using graded concentrations of ethanol and Lemosol A, and cover-slipped.
Electron Microscopical Analysis
For transmission electron microscopy, ultrathin sections (70 nm thick) were cut using a Leica EM UC6 Ultramicrotome (Leica Microsystems, Wetzlar, Germany) and placed onto grids (Veco, Eerbeek, The Netherlands). The electron-stained ultrathin sections were examined using a transmission electron microscope (HT7800, Hitachi, Tokyo, Japan), and images were captured using a CCD camera.
For scanning electron microscopy, the sputter-coated liver blocks were examined under a scanning electron microscope (S-4700, Hitachi High-Technologies Corporation, Tokyo, Japan). Images of the hepatic sinusoidal structures of control and ammonia-treated mice were taken at random from the periportal to the centrilobular fenestrae.
Data Analysis
Western blot and photomicrograph images were analyzed and quantified using ImageJ software (Version 1.54, NIH, Bethesda, MD, USA). Statistical analysis was performed using StatView statistical software (Version 5.0, SAS Institute Inc., Cary, NC, USA). Differences were analyzed using analysis of variance, and significance was set at p < 0.05.
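As an illustration of the group comparisons reported above, the following minimal sketch applies a one-way analysis of variance to hypothetical per-animal counts; the actual analysis was performed in StatView, and all numbers below are invented for demonstration only.

```python
# Minimal sketch of the group comparison described above (hypothetical data);
# the actual analysis was performed in StatView.
import numpy as np
from scipy import stats

# Hypothetical per-animal measurements (e.g., iron-positive hepatocyte counts)
control = np.array([2, 1, 3, 2, 1, 2])
h3 = np.array([3, 2, 4, 3, 2, 3])       # 3 h post-ammonia treatment
h24 = np.array([12, 15, 11, 14, 13, 16])  # 24 h post-ammonia treatment

# One-way analysis of variance across the three groups
f_stat, p_value = stats.f_oneway(control, h3, h24)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # significance threshold: p < 0.05
```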
Conflicts of Interest:
The authors declare no conflict of interest.
Mitigation of divertor edge localised mode power loading by impurity seeding
One of the major challenges for the design of future thermonuclear reactors is the problem of power exhaust—the removal of heat fluxes deposited by plasma particles onto the plasma-facing components (PFCs) of the reactor wall. In order for the reactor to work efficiently
Introduction
The baseline scenario for burning plasma in ITER will be a high confinement mode (H-mode) [1] discharge with Type I edge localised modes (ELMs), as this regime offers the best performance in terms of Q. However, in presently operated machines under attached conditions, this confinement mode is associated with large heat fluxes which, when scaled to the ITER device, represent a serious risk to the plasma-facing components (PFCs).
The heat fluxes during inter-ELM periods can be effectively mitigated in the regime of detachment with the aid of intensive fueling or impurity seeding [2]. However, the energy deposited during unmitigated Type I ELMs alone is expected to be sufficiently large to cause melting of the divertor tungsten monoblocks: the model for the peak ELM energy fluence ϵ_∥ developed by Eich [3] on the basis of a multi-machine experimental data set predicts target values between 9.4 MJ m^-2 (1:1 with respect to the model) and 28 MJ m^-2 (3:1 with respect to the model), while the threshold for melting of the tungsten divertor is around 3 MJ m^-2 [4,5]. Therefore, a mitigation of ϵ_∥ by a factor of at least three (to 1/3:1 with respect to the model) is required to remain in safe operation. ELM mitigation and suppression techniques, such as magnetic perturbations [6], can adequately reduce the energy deposited by ELMs, but at the cost of reduced pedestal pressure [7,8], which can compromise the ability to generate high fusion power.
One prospective tool for the mitigation of ELM energy is the injection of impurities into the edge plasma. Similarly to inter-ELM detachment, the interaction between the plasma and the impurity particles should transfer the thermal energy of the plasma particles into radiation and thus avoid the deposition of highly localised heat fluxes at the divertor targets; this effect is dubbed ELM buffering [9]. However, previous attempts to achieve such a scenario at JET did not result in an appreciable reduction of the deposited energy [9]. Moreover, fluid modelling of ITER ELMs in B2-Eirene suggests that significant ELM buffering is not feasible for large ELMs [10]. Experiments with impurity seeding at other tokamaks have sometimes resulted in a reduction of the ELM energy impacting the divertor targets; however, this was more likely achieved by a transition from the Type I ELMy H-mode into a regime with smaller ELMs [11].
The objective of this work is to report on experimental observations at the ASDEX Upgrade, where significant ELM buffering (up to 80% of dissipated Type I ELM energy) and strong mitigation of the peak ELM energy fluence were observed in experiments with argon, neon and nitrogen seeding.
Scenario overview
The reference scenario for the experiments in the ASDEX Upgrade tokamak was a lower single null Type I ELMy H-mode with plasma current I_p = 1.0 MA, toroidal magnetic field B_T = −2.5 T (ion B × ∇B drift pointing towards the active x-point) and auxiliary heating P_NBI = 10 MW, P_ECRH = 2 MW and P_ICRH = 1.9 MW. The plasma shape featured a relatively low triangularity (δ_l = 0.4, δ_u ∼ 0.18) with elongation κ = 1.7. Plasma fueling was typically kept constant (Γ_D2 = 1.5 × 10^20 el s^-1). A series of discharges was executed using either predefined waveforms or real-time feedback systems for the control of the impurity injection. In the latter case, the real-time measurement of the power reaching the divertor P_div [12] was used to control the injection of argon, and a proxy measurement of the electron temperature at the outer divertor target T_div (using the divertor shunt currents [13]) [14] was used to regulate the injection of nitrogen. Figure 1 shows an example argon-seeded discharge #38 320 with a pre-defined ramp on P_div descending from 8 MW at t = 2.8 s to 4 MW at t = 5.6 s. It can be seen that the electron pedestal pressure p_e (using fixed ρ = 0.95) obtained by the integrated data analysis (IDA) diagnostics [15] remains constant at 7.6 kPa (corresponding to H_98(y, 2) ∼ 1) until t = 5.3 s.
Note that the T_div measurement is based on the assumption that the SOL current reaching the divertor plates is of thermoelectric origin, driven by the difference in electron temperature between the inner and outer targets. During detachment, this mechanism is significantly weakened (as the temperature is low on both targets) and other SOL current contributions (such as Pfirsch-Schlüter currents) may become dominant. In this case, the proxy can yield negative values and naturally ceases to be related to the electron temperature by means of a linear proportionality.
ELM energy
The power dissipated during ELMs, E_buf, was computed as the difference between (i) the energy released during the ELM, ∆W_mhd, and (ii) the energy deposited at the divertor target, E_ELM:

$$E_{\mathrm{buf}} = \Delta W_{\mathrm{mhd}} - E_{\mathrm{ELM}} \qquad (1)$$

While the former was derived from the changes of the plasma stored energy W_mhd evaluated from a magnetic equilibrium reconstruction code during the course of each ELM, the latter was obtained from the measurements of the infrared (IR) camera [16]. Since only IR measurements from the outer target were available, the power fraction flowing towards the outer target had to be determined:

$$f_{\mathrm{o,x}} = E_{\mathrm{ELM,outer}}/E_{\mathrm{ELM}} \qquad (2)$$

Due to diagnostic limitations it was not possible to determine f_o,x directly. Instead, it was assumed that in the absence of extrinsic impurities ∆W_mhd = E_ELM (neglecting the role of intrinsic impurities and the power deposited onto first-wall elements), and so the initial phases of the discharges were used to determine f_out (with typical values 0.6-0.7). Note that the determination of the ELM energy using different diagnostics may be prone to systematic errors, which would propagate into this estimate of f_o,x; therefore, this quantity may not be a good representative of the power sharing between the outer and inner divertor targets. It was also assumed that the impurity injection does not influence the power sharing between the inner and outer divertor targets during the discharge.
Two methods of analysis of the IR data were attempted. The first used directly the measurements of the surface temperature T_surf and computed the energy required to achieve a given increase of T_surf during an ELM using a simple 1D model of heat conduction [17], which for a constant heat flux applied over the ELM rise time ∆t_ELM gives

$$E_{\mathrm{ELM},\Delta T} = f_{\mathrm{wetted}} \sum_{\mathrm{pixels}} A_{\mathrm{pix}}\, \frac{\Delta T_{\mathrm{surf}}}{2} \sqrt{\pi\, c_v \kappa_s\, \Delta t_{\mathrm{ELM}}} \qquad (3)$$

where f_wetted is the fraction of the tile surface exposed to plasma fluxes (equal to 0.8 for the AUG lower outer divertor), A_pix is the target area imaged by one pixel, and c_v and κ_s are the volumetric heat capacity and heat conductivity of tungsten, respectively, both with a weak dependence on temperature that was approximated by a fitted expression. Since the IR camera measures the entire poloidal profile of T_surf, the total ELM energy is calculated as a sum of the energies corresponding to each pixel.
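To make the per-pixel evaluation concrete, the sketch below implements equation (3) as reconstructed above from the standard semi-infinite-solid heat-conduction solution. The function name, the pixel-area argument, and the tungsten material constants are illustrative assumptions rather than the values of [17].

```python
# Sketch of the Delta-T-based ELM energy estimate (equation (3) as reconstructed
# above); the prefactor and the material constants are assumptions, not values
# taken from [17].
import numpy as np

def elm_energy_deltaT(dT_surf, dt_elm, pixel_area, f_wetted=0.8,
                      c_v=2.55e6, kappa_s=120.0):
    """Total ELM energy [J] from per-pixel surface temperature rises.

    dT_surf    : array of temperature increases per pixel [K]
    dt_elm     : array of per-pixel ELM rise times [s]
    pixel_area : target surface area seen by one pixel [m^2]
    c_v        : volumetric heat capacity of tungsten [J m^-3 K^-1] (assumed)
    kappa_s    : heat conductivity of tungsten [W m^-1 K^-1] (assumed)
    """
    # Energy fluence for a constant heat flux on a semi-infinite solid:
    # eps = (dT / 2) * sqrt(pi * c_v * kappa_s * dt)
    fluence = 0.5 * np.asarray(dT_surf) * np.sqrt(
        np.pi * c_v * kappa_s * np.asarray(dt_elm))
    # Sum over all pixels of the poloidal profile, weighted by wetted fraction
    return f_wetted * pixel_area * np.sum(fluence)
```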
The duration of the ELM was determined by a combination of the IR and shunt current measurements. The shunt current measurements [13] were used to determine the ELM start and end. However, in order to obtain ∆t_ELM (which corresponds to the ELM rise time), the time evolution of the peak surface temperature measured by the IR was used, and ∆t_ELM was determined as the difference between the time frames of the minimum and the subsequent maximum temperature within the ELM time window, for each pixel separately (as shown in figure 2). The corresponding temperature difference for a given pixel is used as ∆T_surf in equation (3).
The second method of ELM energy determination is based on the output of the THEODOR code [18], which converts the measurements of surface temperature into the time evolution of the impacting heat fluxes. The heat flux is then integrated along the target and over the duration of the ELM.
Both methods relied on the detection of the ELM start and end times, which were based on the measurements of shunt currents. Since these were also affected by impurity seeding, an adaptive threshold method was employed. An ELM was detected when the shunt current flowing in the outer divertor, I_polsolA, exceeded a value of 1.5 × I_inter−ELM + 1 kA, where I_inter−ELM is the value of the inter-ELM shunt current (obtained by rescaling the T_div signal). A comparison of the two methods for estimating the ELM energy is shown in figure 3(a). The agreement was satisfactory in most cases; however, during impurity seeding, E_ELM,q in some cases yielded negative values, which were clearly unphysical: volumetric radiation increases the measured photon flux, leading to an overestimation of the surface temperature for some time slices [19], and this excess energy leads to an underestimated or even negative heat flux in consecutive time windows. In such cases E_ELM,∆T provided larger values of the ELM energy, representing a more conservative estimate of ELM buffering. This made E_ELM,∆T the preferred quantity for further analysis.
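The adaptive threshold rule translates directly into a few lines of signal processing. In the sketch below, only the threshold itself (1.5 × I_inter−ELM + 1 kA) is taken from the text; the signal names and the edge-pairing logic are illustrative assumptions.

```python
# Sketch of the adaptive ELM-detection rule described above; assumes the trace
# starts and ends below the threshold so rising/falling edges pair up cleanly.
import numpy as np

def detect_elms(i_polsola, i_inter_elm):
    """Return (start, end) index pairs of ELMs in the shunt-current signal.

    i_polsola   : outer-divertor shunt current [A]
    i_inter_elm : slowly varying inter-ELM current level [A]
                  (obtained by rescaling the T_div signal)
    """
    threshold = 1.5 * i_inter_elm + 1.0e3  # adaptive threshold [A]
    above = i_polsola > threshold
    # Rising/falling edges of the boolean mask mark ELM starts/ends
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    return list(zip(starts, ends[: len(starts)]))
```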
Peak ELM energy fluence
While the ELM energy is an important quantity for considerations of the physics processes occurring in intra-ELM SOL plasmas, the magnitude of the peak ELM energy fluence ϵ_∥,peak [3] is decisive for the interaction with the divertor PFCs. Similarly to the calculation of the ELM energy, ϵ_∥,peak can also be obtained (i) using the increment of T_surf during ELMs (analogously to equation (3)) or (ii) by integration of the heat flux q obtained by the THEODOR code over the duration of the ELM:

$$\epsilon_{\parallel,\mathrm{peak},q} = \max_x \frac{1}{\sin \alpha_{\mathrm{imp}}} \int_{\mathrm{ELM}} q(x,t)\,\mathrm{d}t,$$

where α_imp is the angle of impact of the magnetic field onto the divertor target (∼2.7°). The agreement between these two methods is shown in figure 3(b). For consistency with the ELM energy measurements, ϵ_∥,peak,∆T is used for further analysis.
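A minimal sketch of method (ii) is given below, assuming the THEODOR output is available as a 2D heat-flux array over target position and time; the names and the array layout are assumptions made for illustration.

```python
# Sketch of the peak parallel ELM energy fluence from THEODOR heat-flux output,
# following the relation reconstructed above; array names are illustrative.
import numpy as np

def peak_parallel_fluence(q, t, alpha_imp_deg=2.7):
    """Peak parallel energy fluence [J m^-2].

    q : 2D array of target heat flux, shape (n_positions, n_times) [W m^-2]
    t : time base during the ELM [s]
    """
    fluence_perp = np.trapz(q, t, axis=1)  # time-integrated flux per position
    fluence_par = fluence_perp / np.sin(np.deg2rad(alpha_imp_deg))
    return fluence_par.max()               # peak along the target
```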
In the following analysis, the ELM energy and the energy fluence will be treated from different perspectives; however, it is instructive to cross-check their correlations. As shown in figure 4(a), the values of E_ELM,∆T and ϵ_∥,peak,∆T, which are measured by IR at the outer target, are reasonably well correlated. It would also be natural to expect that the measurements based on upstream or core quantities (∆W_mhd and the energy fluence model by Eich, ϵ_∥,peak,Eich, equation (8) in [3]) would exhibit a similar level of correlation; however, this is clearly not the case, as shown in figure 4(b). One possible explanation for this poor correlation is that ϵ_∥,peak,Eich does not capture some hidden parameter. Indeed, previous experimental validations of this model have reported that most measurements are located between the 1:1 and 3:1 multiples with respect to the model; apart from the original dataset used by Eich (which included data from AUG, JET and MAST), this behaviour was also confirmed at HL-2A [20], COMPASS [21] and, to a lesser degree, at DIII-D [22]. Since the effect of buffering on the ELM energy and fluence cannot be characterised jointly due to the weak correlation between ∆W_mhd and ϵ_∥,peak,Eich, a separate analysis for each of these quantities is presented in the following sections.
In order to reduce scatter due to the individual differences between ELMs, data from 10 consecutive ELMs in a given discharge were combined, using the median for the representative value and standard deviation as an estimate of error.
ELM energy proxy
The divertor shunt current measurements I_polsolA exhibited a convenient correlation with the ELM energy derived from the IR measurements (even during the impurity-seeded phases of the studied discharges). This allowed the construction of an ELM energy proxy, E_ELM,proxy, using the integral of this shunt current over the duration of the ELM (as shown in figure 5):

$$E_{\mathrm{ELM,proxy}} \propto \int_{t_{\mathrm{ELM,start}}}^{t_{\mathrm{ELM,end}}} I_{\mathrm{polsolA}}\,\mathrm{d}t.$$

The correlation is not coincidental: the shunt current during ELMs occurs due to the thermoelectric effect [23], which is driven by the electron temperature difference between the inner and outer target, and the ELM electron temperature is expected to be related to the ELM energy. This proxy allowed an estimate of ELM buffering to be obtained even in discharges in which the IR camera did not yield useful data. It has the potential to be used in future real-time systems for the control of the impacting ELM energy, similarly to the T_div control already used at ASDEX Upgrade. The correlation also indicates that the majority of the ELM energy deposited at the outer divertor target is carried by charged particles and not by neutrals or photons.
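The proxy itself reduces to a time integral over the detected ELM window, as sketched below; the calibration constant linking the integrated current to the IR-derived ELM energy is a placeholder, as its value is not given here.

```python
# Sketch of the shunt-current ELM energy proxy: the integral of I_polsolA over
# the ELM window, scaled by a calibration constant obtained from discharges
# with valid IR data (the constant here is a placeholder assumption).
import numpy as np

def elm_energy_proxy(t, i_polsola, i_start, i_end, calib=1.0):
    """Proxy for the ELM energy deposited at the outer target [J].

    i_start, i_end : indices bounding the ELM (e.g., from detect_elms above)
    calib          : J/(A s) calibration against E_ELM from IR measurements
    """
    charge = np.trapz(i_polsola[i_start:i_end], t[i_start:i_end])  # [A s]
    return calib * charge
```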
Characterisation of ELM buffering
An example of an ELM buffering measurement is shown in figure 6. In the reference discharge #39 487, where impurity seeding was not applied, the ELM energy measured at the outer target agrees with the energy released from the plasma (using f_out ∼ 0.8) and also with the energy estimated using the ELM energy proxy. An argon-seeded discharge #38 320 exhibits a large difference between ∆W_mhd and E_ELM,∆T at t > 4.5 s, indicating significant dissipation of the ELM energy; however, the amount of impurity was not optimal, which caused an unintended increase of ∆W_mhd at t = 3.5-4.5 s and also eventually resulted in a disruption at t ∼ 5.5 s. Note that ∆W_mhd for highly buffered ELMs was ∼40 kJ, similar to the reference part of the discharge prior to seeding, which indicates that the ELM type was not affected by the impurity injection. Finally, when both argon and nitrogen were injected in #39 050, the discharge remained stable with good confinement.
The buffered fraction f_buf, defined via equation (1) as

$$f_{\mathrm{buf}} = \frac{E_{\mathrm{buf}}}{\Delta W_{\mathrm{mhd}}} = 1 - \frac{E_{\mathrm{ELM}}}{\Delta W_{\mathrm{mhd}}} \qquad (9)$$

reached up to 80% in #38 320. However, at this point the confinement and pedestal pressure were already degraded. Combined seeding in #39 050 produced f_buf ∼ 60%. Similar experiments were executed also with neon seeding; however, these had to be accompanied by injection of nitrogen in order to maintain a stable discharge [24]. A comparative study using different impurities is summarised in figure 7. All discharges started with similar pre-seeding conditions and an ELM frequency around 100 Hz. Despite numerous attempts, it was not possible to achieve significant ELM buffering (f_buf > 40%) with nitrogen seeding only (figure 7(a)). However, injection of nitrogen typically leads to an increase in ELM frequency and a reduction of the ELM released energy (leading to the smallest ELM energies registered at the divertor target of all studied discharges). Pure argon seeding resulted in a decrease in ELM frequency and can produce large ELM buffering (up to 80%); however, this was typically associated with a degradation of confinement (figure 7(b)). Combined nitrogen and argon seeding produces stable discharges with good confinement, high ELM frequency and significant ELM buffering (figure 7(c)). Finally, combined neon and nitrogen seeding also resulted in significant ELM buffering. Unfortunately, there is only indirect evidence of it (using the ELM energy proxy, square symbols), since there were no useful IR measurements in any of the executed discharges with this impurity. Moreover, neon seems to have a long-lasting deteriorating effect on the plasma confinement, which escaped the simple characterisation used here (black outline for measurements with H_98(y, 2) > 0.8 and magenta outline for H_98(y, 2) < 0.8).
ELM cooling model
A simplified model of ELM cooling is constructed in order to predict the magnitude of the buffered energy. This model treats the ELM as an assembly of plasma particles having an initial T_e = T_i equal to the pedestal electron temperature T_e,ped. As such, the energy of the ELM is a sum of the energies of all its particles (temperatures in energy units):

$$E_{\mathrm{ELM}} = \frac{3}{2} N_{\mathrm{ELM}} T_e \left(1 + \frac{1}{Z}\right) \qquad (10)$$

where Z is the ion mean charge. The absolute number of ELM electrons N_ELM can be determined at the point of ELM formation using the knowledge of the ELM released energy ∆W_mhd:

$$N_{\mathrm{ELM}} = \frac{2\,\Delta W_{\mathrm{mhd}}}{3\, T_{e,\mathrm{ped}}}\,\frac{Z}{Z+1} \qquad (11)$$

We assume that impurity cooling is the only loss channel of the ELM, and we ignore the transport effects which may impact the cooling. All other interactions with the SOL plasma are also ignored (including the effect of the target sheath [25]), as well as the energy transfer between ions and electrons, which was estimated by the free-streaming model [26] to account for up to 1/3 of the electron energy (this magnitude of energy transfer was later confirmed experimentally at COMPASS [27], and measurements consistent with these predictions were also reported at JET [28]). In this greatly simplified picture the energy of the ELM changes as

$$\frac{\mathrm{d}E_{\mathrm{ELM}}}{\mathrm{d}t} = -P_{\mathrm{cool}} = -n_e\, n_{\mathrm{imp}}\, L_z(T_e)\, V_{\mathrm{ELM}} \qquad (12)$$

where V_ELM is the volume occupied by the ELM particles, n_e = N_ELM/V_ELM, and P_cool is the cooling power due to the action of impurities in this volume. Using equation (10) and assuming the ion temperature remains constant, the change of the ELM T_e can be expressed as

$$\frac{\mathrm{d}T_e}{\mathrm{d}t} = -\frac{2}{3}\, n_{\mathrm{imp}}\, L_z(T_e) \qquad (13)$$

Due to the complex dependence of L_z on T_e [29] this equation cannot be solved analytically, but a numerical solution can easily provide the time evolution of T_e. Using the knowledge of the time ∆t the ELM particles need to travel from upstream towards the target, the energy deposited at the target can be calculated as

$$E_{\mathrm{ELM}}(\Delta t) = \frac{3}{2} N_{\mathrm{ELM}} \left( T_e(\Delta t) + \frac{T_{e,\mathrm{ped}}}{Z} \right) \qquad (14)$$

Subsequently, the buffered fraction can be expressed as

$$f_{\mathrm{buf}} = 1 - \frac{E_{\mathrm{ELM}}(\Delta t)}{E_{\mathrm{ELM}}(0)} = \frac{Z}{Z+1}\left(1 - \frac{T_e(\Delta t)}{T_{e,\mathrm{ped}}}\right) \qquad (15)$$

The maximum buffered fraction predicted by this model (T_e(t = ∆t) → 0) is therefore equal to Z/(Z+1), ranging between 50% (Z = 1) and 60% (Z = 1.5). The predictions of this model using the divertor spectroscopy measurements [30] of the argon and nitrogen impurity densities n_Ar and n_N (assuming a fixed Z = 1.1 in the absence of direct measurements of this quantity) are shown in figure 8. In discharges #38 320, #39 046 and #39 056 the model has good quantitative agreement with the experiment, which is a splendid result given the fact that the model has no tuning parameters and is therefore fully predictive. However, in other discharges the model underestimates the amount of buffering by up to a factor of 50. There are a number of reasons why this could occur, most probably that the location of the impurity concentration measurements is not representative of the impurity concentration used in the model, or the omission of the ELM interaction with neutrals. The model is non-linearly sensitive to the impurity concentration, as demonstrated in figure 9(a), where the ELM cooling model was calculated for a selected ELM in #38 320 using the measured value of n_Ar as well as its 0.5 and 1.5 multiples (indicating the uncertainty of the measurement). This is a result of the strongly varying magnitude of L_z with T_e (see figure 9(b)): as soon as the ELM T_e cools down to ∼300 eV, the subsequent cooling is greatly accelerated and in some cases the T_e is reduced practically to zero. Unfortunately, this strong dependence interferes with the actual precision of the impurity concentration measurements, which was estimated to be 50%.
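Since equation (13) cannot be solved analytically, a numerical integration is the natural way to evaluate the model. The sketch below integrates the reconstructed equations (13) and (15) with an explicit Euler scheme; the cooling-rate curve L_z(T_e) is a qualitative placeholder (a real calculation would use tabulated radiative cooling rates, e.g. from ADAS), and the numbers in the example call are purely illustrative.

```python
# Numerical sketch of the ELM cooling model: integrate dTe/dt = -(2/3) n_imp Lz(Te)
# (equation (13) as reconstructed above) and evaluate the buffered fraction of
# equation (15). The Lz(Te) curve is a rough stand-in, not tabulated data.
import numpy as np

def lz_placeholder(te_ev):
    # Placeholder cooling-rate curve [W m^3]: stronger radiation below ~300 eV,
    # weaker at pedestal-like temperatures (qualitative shape only).
    return 2e-31 * (300.0 / np.clip(te_ev, 1.0, None)) ** 1.5

def buffered_fraction(te_ped_ev, n_imp, dt, z_mean=1.1, n_steps=10_000):
    te = te_ped_ev
    h = dt / n_steps
    for _ in range(n_steps):  # explicit Euler integration of equation (13)
        # 1.602e-19 converts the cooling rate from J/s to eV/s per electron
        te = max(te - h * (2.0 / 3.0) * n_imp * lz_placeholder(te) / 1.602e-19, 0.0)
    # Equation (15): only the electron channel is cooled, hence the Z/(Z+1) cap
    return (z_mean / (z_mean + 1.0)) * (1.0 - te / te_ped_ev)

# Illustrative call: pedestal Te ~ 800 eV, n_imp ~ 1e19 m^-3, transit ~ 250 us
print(buffered_fraction(800.0, 1e19, 250e-6))
```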
However, there are probably other factors influencing the magnitude of the buffering, as suggested by a comparison of the similar discharges #38 320 and #37 493 (both argon seeded). While the argon density and concentration were systematically lower in #37 493 (see figures 10(a) and (b)), the buffered energy derived from the IR measurements was actually higher, even though the initial ELM energies were similar in both discharges, as shown in figure 10(b). At the time of writing, these factors have not yet been identified.
Peak ELM energy fluence
The peak ELM energy fluence ϵ_∥ was compared to the model developed by Eich et al [3] (see figure 11), in which a is the plasma minor radius, κ the plasma elongation, ∆_equii a geometrical factor (assumed ∼2.0 for AUG) and B_tor/B_pol = 4.2 the ratio of the toroidal and poloidal magnetic fields at the outer midplane. The reference discharge #39 487 and the measurements prior to seeding in other discharges are located between the 1:1 and 3:1 lines, similarly to previously reported measurements at AUG. It can be seen that impurity seeding leads to a significant reduction of ϵ_∥, eventually reaching the 1:3 line. At this level, the impact of the ELMs is comparable to that of the I-mode pedestal relaxation events [31]. The trajectory of such discharges in figure 11(a) is almost vertical, so unlike other ELM mitigation techniques, impurity seeding evades Eich's model. In the case of discharges with confinement degradation (such as #39 046), the discharge trajectory is better correlated with the predictions of the model. The ratio ϵ_∥,peak,∆T/ϵ_∥,peak,Eich can be reasonably well parametrised using the ELM frequency f_ELM, as shown in figure 12(b). In these discharges, where the injection of impurities resulted in an increase of f_ELM, the changes in the peak ELM energy fluence can be characterised by an empirical scaling with f_ELM. This scaling describes the evolution of ϵ_∥,peak,∆T/ϵ_∥,peak,Eich within the boundaries of validity of the Eich model (between the 1:1 and 3:1 lines) but also the measurements below the 1:1 line. It is natural to suspect that f_ELM is not the real driving parameter but perhaps a proxy correlated with other, undetected parameters with some link to the concentration of impurities and neutrals. However, at the time of writing these parameters have not been resolved. On the other hand, f_ELM is a convenient parameter for practical purposes, with a potential for implementation in real-time control systems.
It can be seen that the variation of ϵ_∥,peak,∆T/ϵ_∥,peak,Eich is significantly larger (a factor of 8 in #39 056) than that of f_buf, which may be surprising since the energy and the peak energy fluence of the ELMs are tightly correlated (as demonstrated in figure 4(a)). As mentioned earlier, the additional dynamic range of ϵ_∥,peak,∆T/ϵ_∥,peak,Eich stems from the precision of Eich's model: as reported by Eich, the experimental values, even without any effect of impurities, vary between the 1:1 and 3:1 lines.
As demonstrated in figure 12(a), the largest reduction of ϵ_∥,peak,∆T/ϵ_∥,peak,Eich is achieved for ELMs of small size (∆W_mhd ∼ 2%), which is consistent with the favourable scaling with f_ELM. Note that the Eich model was verified only for Type I ELMs. While the ELMs prior to the beginning of the seeding are clearly Type I ELMs, it is in practice impossible to verify the nature of the buffered ELMs due to the dynamic evolution of the discharges. However, since a significant reduction of ϵ_∥,peak,∆T can be achieved without confinement degradation, it is still assumed that these buffered ELMs are relevant to ITER.
ELM buffering relation to detachment and plasma confinement
The use of impurities to dissipate power in the SOL draws its inspiration from the physics of detachment, which is relevant to inter-ELM plasma conditions. However, a comparison of the ELM buffering fraction with the proxy for the inter-ELM temperature T_div (see figure 13) shows a complex relationship between the impact of impurities in intra- and inter-ELM plasma conditions.
The reference discharge #39 487 is clearly attached, with T_div ∼ 30 eV. In the case of the argon-seeded discharges there is a notable drop in T_div to ∼5 eV even when f_buf is small. For high f_buf there is a clear anti-correlation between the two quantities, meaning that with high f_buf the discharge approaches partial detachment. However, reaching T_div ∼ 0 eV was only possible at the cost of degraded confinement.
In the case of nitrogen-only seeding, T_div and f_buf appear to be decorrelated: the discharge #35 158 reaches negative T_div (which is considered an indication of partial detachment) without significant ELM buffering. This suggests that a mixture of different impurities may be required to achieve simultaneously inter-ELM detachment (using light impurities) and ELM buffering (using heavier impurities). This strategy is consistent with the idea that significantly different electron temperatures are relevant in the two phases of the ELMy H-mode.
When considering the suitability of impurity seeding for future thermonuclear reactors, it is essential to assess the effect of injected impurities on the confinement, here represented by the H-factor H_98(y,2). The relation between the buffered fraction and the quality of confinement is shown in figure 14(a). The pre-seeded phases of discharges and the reference discharge #39 487 are characterised by H_98(y,2) = 0.9.
Argon seeding initially allowed the confinement to reach H_98(y,2) = 1. However, the confinement then typically deteriorated to H_98(y,2) ∼ 0.8 for high buffered fractions. Nitrogen seeding was not capable of producing strong buffering; however, it helped to improve the H-factor up to 1.1 (this behaviour has already been reported at ASDEX Upgrade [32]). The optimal results were achieved by a combination of argon and nitrogen seeding, with a high buffered fraction (∼60%) and a reasonable H-factor of 0.95 (discharge #39 050). Higher buffering fractions (⩾60%) were only obtained intermittently and were typically followed by a strong degradation of confinement or even a disruption. This appears to be consistent with the simple model for the limit of ELM buffering presented in section 5: f_buf above ∼60% would require a reduction of the ion temperature during the ELM transport in the SOL. Figure 14(b) shows clearly that increased f_buf always correlated with an increase of Z_eff.
Conclusions and outlook
Experiments with argon, neon and nitrogen seeding were used to investigate the ELM buffering phenomenon at AUG. It was observed that argon is necessary to achieve a significant buffering fraction f_buf > 40%; however, argon-only seeded discharges suffered from a degradation of confinement to H_98(y,2) ∼ 0.8. This shortcoming was resolved by the addition of nitrogen, which on its own was unable to produce significant ELM buffering, but had a positive role in the confinement of the discharge and also aided in achieving inter-ELM detachment.
A simplified model of ELM cooling using divertor impurity concentration measurements was proposed for the prediction of the magnitude of the buffered ELM energy. While some discharges with argon-only seeding seem to be in reasonable agreement with the model, there is still considerable room for improvement. The model predicts the maximum buffering to be between 50% and 60%, depending on the impurity charge Z.
The studied discharges were in agreement with Eich's model for the peak ELM energy fluence, except for the case of strong buffering, where the measured ϵ_|| was up to three times lower than the prediction of the model. These results suggest that mitigation of divertor heat loading by a mix of radiative impurities may help to alleviate the potential component damage caused by edge transients. The reduction of ϵ_|| is large enough to become marginally acceptable for ITER from the point of view of the PFC melting risk. Since the buffering is not expected to change the ion energy, there can still be detrimental consequences for PFCs due to increased material erosion during ELMs.
As a next step, the process of ELM buffering will be studied using the 1D3V particle-in-cell code BIT1, which is capable of simulating the temporal evolution of an ELM as well as different collisional processes in the SOL.The modelling will help to improve the simplified model presented in this work.
In future experiments, it would be desirable to develop a real-time control system, which would regulate the impurity seeding rate in order to maintain a desired impacting ELM energy.The ELM energy proxy based on divertor shunt current measurements represents an example of a sensor which could be implemented in such a system.
Figure 1. Overview of the main plasma parameters of discharge #38 320 with argon seeding regulated by the real-time control system on P_div.
Figure 2. Measurements of T_surf [°C] during ELMs without seeding (a) and during argon seeding (b) in #38 320. The yellow line indicates the time frame with the minimum temperature, the pink line the time frame with a subsequent maximum of temperature. Blue and red dashed lines indicate the beginning and end of an ELM as measured by the shunt currents.
Figure 3. Comparison of the ELM energy (a) and peak ELM energy fluence (b) deposited at the target by two different methods of IR analysis.
Figure 4. Correlations between the ELM energy and fluence as measured at the outer target (a) and between the ELM released energy measured by W_mhd and the predictions of the peak ELM energy fluence using the Eich model (b).
Figure 5. Comparison of the ELM energy measured by the IR camera with a proxy based on shunt current measurements. Each point represents an individual ELM.
Figure 6. Evolution of the ELM energy in the reference discharge without seeding #39 487 (a), the argon-seeded discharge #38 320 (b) and a discharge with combined argon and nitrogen seeding (c). ELM quantities are averaged over 10 consecutive ELMs.
Figure 7. Dependence of the buffered fraction on the ELM frequency in the case of nitrogen (a), argon (b), argon + nitrogen (c) and neon + nitrogen seeding (d). The red area indicates highly efficient ELM buffering. Square symbols represent measurements obtained using the ELM energy proxy. Magenta outlines indicate measurements with H_98(y,2) < 0.8.
Figure 8. Comparison of the heuristic model for ELM buffering with experimental observations.
Figure 9. Time evolution of the ELM electron temperature with varying impurity density (a) and the temperature dependence of the cooling factor L_z (b). The values of L_z were obtained from [29].
Figure 11. Measurements of the peak ELM energy fluence ϵ_||,peak compared to the predictions of the ELM model [3] (a) and the evolution of the normalised fluence in time for selected discharges (b).
Figure 12. Comparison of the peak ELM energy fluence and the ELM size (a) and the dependence of the peak ELM energy fluence on the ELM frequency (b).
Figure 14. Confinement quality expressed using H_98(y,2) versus the buffered fraction f_buf (a) and the relation between Z_eff and the buffered fraction (b).
"Physics"
] |
DSS-TRM: deep spatial–spectral transformer for hyperspectral image classification
ABSTRACT In recent years, the wide use of deep learning based methods has greatly improved the classification performance of hyperspectral images (HSIs). As an effective way to improve the performance of deep convolution networks, the attention mechanism is also widely used for HSI classification tasks. However, the majority of the existing attention mechanisms for HSI classification are based on the convolution layer, and the classification accuracy still has margin for improvement. Motivated by the latest self attention mechanism in natural language processing, a deep transformer is proposed for HSI classification in this paper. Specifically, deep transformers along the spectral dimension and the spatial dimension are explored respectively. Then, a deep spatial-spectral transformer (DSS-TRM) is proposed to improve the classification performance of HSI. The contribution of this paper is to make full use of the self attention mechanism, that is, to use transformer layers instead of convolution layers. More importantly, a DSS-TRM is proposed to realize end-to-end HSI classification. Extensive experiments are conducted on three HSI data sets. The experimental results demonstrate that the proposed DSS-TRM can outperform traditional convolutional neural networks and attention based methods.
Introduction
Hyperspectral image (HSI) provides the spectral and spatial information of ground objects at the same time (L. Zhang et al., 2016). This abundant information can be used to distinguish different classes of ground objects. However, the abundant information also leads to high-dimensional data characteristics, which makes the "curse of dimensionality" problem serious. In this context, the critical issue of HSI classification is how to use the abundant spatial-spectral information (Liu et al., 2018).
Aiming at taking advantage of the rich spectral information, numerous traditional machine learning methods have been introduced into HSI classification tasks. These methods include k-nearest neighbor, support vector machine (SVM) (Fauvel et al., 2008), logistic regression (J. Li et al., 2013), extreme learning machine (W. Li et al., 2015), random forest (Peerbhay et al., 2015) and so on. Simultaneously, spectral dimension reduction methods like principal component analysis (PCA) (Yang et al., 2017), independent component analysis (ICA) and linear discriminant analysis (LDA) (C. H. Li et al., 2011) are utilized to improve the classification efficiency of HSI.
In remote sensing images, the closer the distance between two pixels, the more likely they are to belong to the same class of object (Guo & Zhu, 2019). This means that considering spatial neighborhood information in the classification process helps to improve the classification accuracy. One of the most common ways to consider spatial information is to introduce neighborhood information in the process of feature extraction. Feature extraction methods considering both spectral and spatial information are called spatial-spectral feature extraction methods. At present, classification methods based on spatial-spectral features have become the mainstream in HSI classification. Classic spatial-spectral feature extraction methods are local binary patterns (LBP) (Sen Jia, Hu et al., 2017), Gabor features (S. Jia, Deng et al., 2017) and morphological profiles. Compared with only using spectral features, these feature extraction methods greatly improve the classification performance of HSI. However, their biggest disadvantage is that they rely on artificial feature extraction rules.
Deep learning methods can automatically learn hierarchical feature representations in an end-to-end manner and do not need hand-crafted features, so research on HSI classification methods based on deep learning has become a hotspot in recent years (Audebert et al., 2019; S. Li et al., 2019; L. Ma et al., 2019). Typical deep learning methods include the deep belief network (DBN) (T. Li et al., 2014), stacked auto-encoder (SAE) (Chen et al., 2014), recurrent neural networks (RNNs) (Mou et al., 2017a), convolutional neural network (CNN) (Guo & Zhu, 2019; Lee & Kwon, 2017; Yu et al., 2020; M. Zhang et al., 2018) and so on. These deep learning based methods have been widely used for the classification of HSI. In particular, CNNs have achieved great success in HSI classification. The 1D-CNN was first used to extract spectral features in HSI, but it requires the input to be a one-dimensional vector. To make full use of spatial-spectral information, many researchers designed 2D-CNN models to extract discriminative features. A 2D-CNN, usually combined with dimension reduction methods like PCA, easily misses channel relationship information and lacks detailed spectral information; thus, the 3D-CNN was proposed to extract spectral-spatial features. In addition, residual networks (Mou et al., 2017b; Xue et al., 2021), densely connected networks and other modern network structures have been introduced to make networks for HSI classification easier to train.
Although the aforementioned deep learning based methods have made great progress on HSI classification performance, how to use fewer labeled samples to obtain higher classification accuracy has always been the goal of HSI classification research (Wang et al., 2021). Attention is an inherent signal processing mechanism of the human brain. The human brain quickly selects the areas that need attention from visual signals, commonly known as the attention focus, and then concentrates on the details of these areas. The attention mechanism of human vision greatly improves the efficiency and accuracy of visual information processing (Huang et al., 2021; Xu et al., 2021). Inspired by this, researchers introduced the attention mechanism into deep learning models for visual and natural language processing tasks to improve model performance. As an effective method to improve the performance of CNNs, the attention mechanism is also widely used for HSI classification tasks. For example, an attention mechanism was introduced into a ResNet to make the model learn more discriminative spatial-spectral features (Haut et al., 2019). A spectral-spatial attention network (SSAN) was designed to capture discriminative features from HSI cubes. A 3D attention module was also introduced to enhance the expressiveness of the features.
Attention mechanisms for HSI classification have achieved significant improvement. However, the majority of the existing attention mechanisms for HSI classification are based on the convolution layer, and the classification accuracy still has margin for improvement because CNNs are not good at modeling long-distance dependencies and obtaining global context information (Dosovitskiy et al., 2020; Tan et al., 2021). By contrast, the transformer model can better utilize global context information within a large range by treating the input image as sequential patches. Based on the self attention mechanism, transformers were first proposed for machine translation and have since become the state-of-the-art method in many natural language processing (NLP) tasks (Vaswani et al., 2017). On account of the NLP successes, multiple works have tried combining CNN-like architectures with self attention. Motivated by the latest self attention mechanism in NLP, a deep transformer is proposed for HSI classification in this paper. Specifically, deep transformers along the spectral dimension and the spatial dimension are explored respectively. Then, a deep spatial-spectral transformer (DSS-TRM) is proposed to improve the classification performance of HSI.
The main contributions of this article are as follows. Firstly, we explore the application of the self attention mechanism to improve the classification accuracy of HSI, which provides a new method for HSI processing and analysis. Secondly, a DSS-TRM is proposed to pay attention to the discriminative features of the spectral dimension and the spatial dimension respectively, which enables the model to obtain higher classification accuracy. Thirdly, a large number of experiments are conducted on three HSI datasets. As far as we know, this is the first time that the transformer model is used to extract features along the spectral and spatial dimensions respectively, which provides a new direction for the study of hyperspectral image classification. The experimental results demonstrate that the proposed DSS-TRM can outperform traditional convolutional neural networks and attention based methods.
Proposed framework
In this work, we develop a novel classification framework (DSS-TRM) for HSI classification. As shown in Figure 1, the proposed framework based on the transformer consists of a spectral self attention model (SpecSAM) and a spatial self attention model (SpatSAM). The SpecSAM learns to pay attention to the features along the spectral dimension, and the SpatSAM learns to pay attention to the features along the spatial dimension. The features extracted by SpecSAM and SpatSAM are fused and input into the classifier, which enables the proposed framework to make better use of spectral and spatial information to improve the classification accuracy.
Transformer
The transformer is a novel deep learning model based on the self attention mechanism and a feed-forward neural network. Different from the convolution layers commonly used in HSI classification, the transformer layer computes feature representations relying entirely on the self attention mechanism, which can obtain informative and robust feature representations. A trainable deep learning model based on transformers can be built by stacking several transformer layers. More specifically, this deep model can be described by three parts: positional encoding, self attention and the feed-forward network, as shown in Figure 2.
Positional encoding
Different from CNN or RNN models, transformer layers contain no convolution and no recurrence. Therefore, in order to enable a model based on transformer layers to make use of the order of the sequence and position information, positional encoding is introduced. The positional encoding operation outputs a positional vector with the same dimension as the input feature vector. Formally, sine and cosine functions of different frequencies are used to produce positional vectors:

PE(pos, 2i) = sin(pos / 10000^(2i/d_model)), PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)), (1)

where pos denotes the position index of the feature vector. Let the length of the feature sequence be L; then the range of pos is 0, 1, ..., L-1. d_model is the feature dimension, and i = 0, 1, ..., d_model/2 - 1 denotes the index of the feature dimension. After positional encoding, some position information about the relative or absolute position is injected into the original feature vectors. It should be noted that the positional encoding operation is performed only before the first transformer layer, and the sum of the obtained positional vectors and the original feature vectors is the input of the first transformer layer.
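As a concrete illustration, the following is a minimal sketch of the sinusoidal encoding in equation (1), written in PyTorch (the library used for the experiments in this paper); the function name and tensor layout are our own choices, not the authors' code.

```python
import torch

def positional_encoding(L: int, d_model: int) -> torch.Tensor:
    """Sinusoidal positional vectors of Eq. (1): pe[pos, 2i] = sin(...),
    pe[pos, 2i+1] = cos(...). Returns an (L, d_model) tensor that is added
    to the input feature sequence before the first transformer layer."""
    pos = torch.arange(L, dtype=torch.float32).unsqueeze(1)    # (L, 1)
    i = torch.arange(0, d_model, 2, dtype=torch.float32)       # even indices 2i
    angle = pos / torch.pow(10000.0, i / d_model)              # (L, d_model/2)
    pe = torch.zeros(L, d_model)
    pe[:, 0::2] = torch.sin(angle)   # even feature dimensions: sine
    pe[:, 1::2] = torch.cos(angle)   # odd feature dimensions: cosine
    return pe
```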
Self attention
Self attention mechanism is the core part of a transformer layer. The self attention mechanism can be regarded as a mapping function that maps a query vector and a set of key-value pairs to an output. The output of the self attention mechanism is the weighted sum of the value vectors, and the weights assigned to each value vector are calculated from the query vectors and the corresponding key vectors. In this work, scaled dot-product attention is used:

Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, with Q = X W^Q, K = X W^K, V = X W^V, (2)

where Q, K and V are the query matrix, key matrix and value matrix, respectively, X is the feature vector sequence and d_k is the dimension of the feature vectors.
Furthermore, the multi-head attention (MHA) mechanism is introduced to improve the performance of the self attention model. Specifically, in a transformer layer, multiple query matrices, key matrices and value matrices are generated simultaneously, and multiple output features are generated according to Eq. (2). This means that one input vector corresponds to multiple output vectors, which enables the model to extract richer feature representations. Then, the multiple output vectors are concatenated and multiplied by a parameter matrix to obtain the final output vectors. Formally, the multi-head attention mechanism is as follows:

MultiHead(X) = Concat(head_1, ..., head_h) W^O, head_i = Attention(X W_i^Q, X W_i^K, X W_i^V),

where W^O is a learned parameter matrix and W_i^Q, W_i^K, W_i^V are the parameter matrices of the i-th head. Just as different convolution kernels can extract different features, different heads of the MHA mechanism in transformer layers can learn different attentions.
As shown in Figure 2, self attention mechanism maps a feature vector sequence X to another feature vector sequence Z containing the information about the original word or pixel and the relationship between words or pixels.
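To make Eq. (2) and the multi-head formulation concrete, here is a from-scratch PyTorch sketch; the class and function names are illustrative, and details such as bias terms follow the standard transformer rather than anything stated in the paper.

```python
import math
import torch
import torch.nn as nn

def scaled_dot_product_attention(Q, K, V):
    """Eq. (2): softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)
    return torch.softmax(scores, dim=-1) @ V

class MultiHeadAttention(nn.Module):
    """h parallel attention heads, concatenated and projected by W^O."""
    def __init__(self, d_model: int, h: int = 8):
        super().__init__()
        assert d_model % h == 0
        self.h, self.d_k = h, d_model // h
        self.W_q = nn.Linear(d_model, d_model)   # stacks all W_i^Q
        self.W_k = nn.Linear(d_model, d_model)   # stacks all W_i^K
        self.W_v = nn.Linear(d_model, d_model)   # stacks all W_i^V
        self.W_o = nn.Linear(d_model, d_model)   # output projection W^O

    def forward(self, X):                        # X: (batch, L, d_model)
        B, L, _ = X.shape
        def heads(t):                            # -> (batch, h, L, d_k)
            return t.view(B, L, self.h, self.d_k).transpose(1, 2)
        Z = scaled_dot_product_attention(heads(self.W_q(X)),
                                         heads(self.W_k(X)),
                                         heads(self.W_v(X)))
        Z = Z.transpose(1, 2).reshape(B, L, -1)  # concatenate the heads
        return self.W_o(Z)
```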
Feed-forward network
The feed-forward neural network is the last part of a transformer layer. In the proposed method, two fully connected layers are used to build the feed-forward network, in which the features are calculated by

FFN(x_i) = max(0, x_i W_1 + b_1) W_2 + b_2,

where i is the feature vector index. It should be noted that the parameters of the feed-forward network are shared across all features in the corresponding transformer layer. In addition, as shown in Figure 2, residual connections are introduced into each self attention layer and feed-forward network layer to improve the trainability of the model and make full use of the features extracted at different levels.
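Putting the three parts together, a single transformer layer with the residual connections of Figure 2 might look as follows; the layer-norm placement and the hidden width d_ff are assumptions, since the paper does not specify them, and PyTorch's built-in nn.MultiheadAttention is used here to keep the sketch self-contained.

```python
import torch.nn as nn

class TransformerLayer(nn.Module):
    """One transformer layer: multi-head self attention and a two-layer
    feed-forward network, each wrapped in a residual connection."""
    def __init__(self, d_model: int, n_heads: int = 8, d_ff: int = 256):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):                    # x: (batch, L, d_model)
        attn_out, _ = self.attn(x, x, x)     # self attention: Q = K = V = x
        x = self.norm1(x + attn_out)         # residual connection 1
        return self.norm2(x + self.ffn(x))   # residual connection 2
```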
Deep spatial-spectral transformer
Compared with a convolutional neural network, a transformer has fewer training parameters and can learn more abstract features by using the self attention mechanism, so it can improve the accuracy of classification and recognition tasks. HSIs provide not only spectral information but also abundant spatial information. Consequently, we use the transformer model along both the spectral and spatial dimensions. As shown in Figure 1, we call the proposed framework DSS-TRM.
Pixel embedding
Transformers take a feature sequence as input, so the hyperspectral data cube needs to be transformed into a feature sequence. Each sample of an HSI is a pixel, so we call this conversion process pixel embedding. DSS-TRM consists of a SpecSAM and a SpatSAM. For the spectral dimension, we use a convolution layer to transform the image blocks of different bands into one-dimensional feature vectors. The length of the feature sequence is equal to the number of bands. Then, the feature sequence is input into the SpecSAM to learn to pay attention to the features along the spectral dimension. It is notable that the SpecSAM is a model with several stacked transformer layers.
For the spatial dimension, PCA is applied to the hyperspectral data cube, and the first three principal components are selected. Referring to the relevant research on transformers in image processing (Dosovitskiy et al., 2020), the image block (of the selected three bands) is divided into 16 patches of the same size along the spatial dimension. Specifically, patches are divided from top to bottom and from left to right. Similar to the spectral dimension, a convolution layer is applied to map the 16 patches into 16 one-dimensional feature vectors. The length of the feature vectors is equal to the number of convolution kernels. In this way, the feature vector sequence along the spatial dimension is obtained, and the sequence is then input into another stacked transformer model (SpatSAM).
It should be noted that SpecSAM and SpatSAM actually focus on different features, even though both inputs are image blocks during pixel embedding. In SpecSAM, the image blocks in each band are converted into feature vectors, and the feature vector sequence is arranged in band order. In SpatSAM, the image block composed of the three principal components is divided into patches in spatial order before pixel embedding, so the generated feature sequence is arranged in spatial order. In addition, PCA makes SpatSAM more suitable for learning spatial features, while the input of SpecSAM contains all the spectral information.
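The two embeddings could be sketched as follows; kernel sizes are inferred from the input sizes stated later (16 × 16 band images, 32 × 32 × 3 PCA blocks, 16 patches, 128 kernels) and the module name is hypothetical.

```python
import torch.nn as nn

class PixelEmbedding(nn.Module):
    """Spectral branch: each of the C band images (16 x 16) is mapped to one
    128-dim vector, giving a length-C sequence for SpecSAM. Spatial branch:
    the 32 x 32 x 3 PCA block is cut into 16 non-overlapping 8 x 8 patches,
    each mapped to a 128-dim vector, giving a length-16 sequence for SpatSAM."""
    def __init__(self, d_model: int = 128):
        super().__init__()
        self.spec_conv = nn.Conv2d(1, d_model, kernel_size=16)           # 16x16 -> 1x1
        self.spat_conv = nn.Conv2d(3, d_model, kernel_size=8, stride=8)  # 32x32 -> 4x4

    def forward(self, cube, pca_block):
        # cube: (B, C, 16, 16); pca_block: (B, 3, 32, 32)
        B, C, H, W = cube.shape
        spec = self.spec_conv(cube.reshape(B * C, 1, H, W))  # (B*C, 128, 1, 1)
        spec_seq = spec.view(B, C, -1)                       # (B, C, 128)
        spat = self.spat_conv(pca_block)                     # (B, 128, 4, 4)
        spat_seq = spat.flatten(2).transpose(1, 2)           # (B, 16, 128)
        return spec_seq, spat_seq
```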
Feature fusion
The SpecSAM and SpatSAM are responsible for extracting important spectral and spatial features, respectively. The extracted spectral and spatial features are each input into a multilayer perceptron (MLP) with two fully connected layers. Then DSS-TRM needs to fuse the spectral and spatial features to further improve the classification performance. In this work, we tested three feature fusion methods: the first is concatenating the spectral and spatial features, the second is point-wise addition, and the third is point-wise multiplication. Finally, the fused features are input into an MLP to output the label.
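The three fusion options amount to a one-line choice each; the following sketch (function name our own) summarizes them, with point-wise multiplication being the variant retained in the experiments reported later.

```python
import torch

def fuse(spec_feat: torch.Tensor, spat_feat: torch.Tensor, mode: str = "mul"):
    """Fuse spectral and spatial feature vectors of equal dimension."""
    if mode == "cat":
        return torch.cat([spec_feat, spat_feat], dim=-1)  # concatenation
    if mode == "add":
        return spec_feat + spat_feat                      # point-wise addition
    return spec_feat * spat_feat                          # point-wise multiplication
```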
The proposed DSS-TRM is an end-to-end framework. It could be trained by the back propagation algorithm. In this work, the widely used Adam (adaptive moment estimation) optimizer is adopted to train the framework.
Experiments
In this section, extensive experiments are carried out and the results are analysed in detail to demonstrate the effectiveness of the proposed method. All the algorithms are implemented with the PyTorch library, and all experimental results are generated on a computer equipped with an Intel(R) Xeon(R) Gold 6152 CPU, an Nvidia A100 PCIE and 256 GB of memory.
Data description and experimental design
Three widely used hyperspectral data sets are selected for experiments to verify the effectiveness of the proposed method in HSI classification. The detailed data description is described below.
The first data set, University of Pavia (UP), was collected by the Reflective Optics Spectrographic Imaging System (ROSIS) over the city of Pavia, Italy. The spatial size of the data set is 610 × 340 pixels with a 1.3 m/pixel spatial resolution, and the spectral range covers from 430 to 860 nm with 103 bands after removing 12 noisy bands. Besides the unlabeled pixels, the data set consists of 9 manually labeled classes. The numbers of training samples and testing samples used in the experiments are listed in Table 1.
The second data set, Salinas (SA), was collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over the region of Salinas Valley, CA, USA. The spatial size of the data set is 512 × 217 pixels with a 3.7 m/pixel spatial resolution, and the spectral range covers from 400 to 2500 nm with 204 bands after removing 20 noisy bands. Besides the unlabeled pixels, the data set consists of 16 manually labeled classes. The numbers of training samples and testing samples used in the experiments are listed in Table 2.
The third data set, Indian Pines (IP), was collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over a region of north-western Indiana. The spatial size of the data set is 145 × 145 pixels with a 20 m/pixel spatial resolution, and the spectral range covers from 400 to 2500 nm with 200 bands after removing 20 noisy bands. Besides the unlabeled pixels, the data set consists of 16 manually labeled classes. Following related studies, however, only 9 labeled classes are used in this paper to avoid the classes that have very few training samples. The numbers of training samples and testing samples used in the experiments are listed in Table 3.
In the experiments, the data cube within the pixel neighborhood is used for pixel embedding. Specifically, for the spectral dimension, 16 × 16 × C cubes (C denotes the number of bands in the HSI) around the pixels are selected as input data. For the spatial dimension, 32 × 32 × 3 cubes around the pixels after dimensionality reduction are selected as input data. The purpose of selecting a large neighborhood is to make full use of the spectral and spatial information in the HSI and further improve the classification accuracy. It should be noted that the pixels to be classified are located at the center of the patches; in each patch with an even size, the right and bottom sides have one more row or column of pixels than the left and top sides.
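A small NumPy sketch of this neighborhood extraction, under our own assumptions about padding (the paper does not state how border pixels are handled); the even-size asymmetry described above falls out of the offset arithmetic.

```python
import numpy as np

def extract_cube(img: np.ndarray, row: int, col: int, size: int) -> np.ndarray:
    """Cut a size x size neighborhood centred on (row, col) from an (H, W, C)
    image. Zero-padding (an assumption) gives border pixels full cubes; for
    even sizes the window keeps one extra row/column on the bottom/right."""
    pad = size // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)))
    off = (size - 1) // 2                 # rows/cols kept above and left
    r, c = row + pad, col + pad           # centre position in the padded image
    return padded[r - off:r - off + size, c - off:c - off + size, :]
```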
Hyperparameter settings
Similar to other related research, the classification performance of the proposed method is further improved by searching for the optimal hyperparameters. The influence of three main hyperparameters (the depth of the model, the learning rate and the feature fusion method) on the classification accuracy is explored, and the detailed experimental results and analyses are as follows.
The proposed method, DSS-TRM, mainly consists of transformer layers. The number of transformer layers directly determines the depth of the model and mainly affects the feature representation ability of the model. To simplify the search space of model structures, the depths of SpecSAM and SpatSAM are always kept consistent in the experiments. The number of transformer layers is denoted by the variable L, and its range is set as 2, 4, 6, 8, 10. Figure 3 shows the relationship between the model depth and the overall classification accuracy. As we can see, with the increase of the number of transformer layers, the overall classification accuracy first rises and then declines on the three HSI data sets. For the UP and IP data sets, the optimal value of L is 6; for the SA data set, the optimal value of L is 8.
The value of the learning rate directly affects the training effect of the model. An appropriate learning rate enables the model to obtain a better training effect with limited labeled samples and thus achieve higher classification accuracy. The loss value is the most direct index reflecting the effect of model training. Therefore, the influence of different learning rates on the loss value of the model during training is explored, and the results are presented in Figure 4. It can be seen that a larger learning rate (lr = 0.0001, where lr is the abbreviation of learning rate) can always make the model obtain a smaller and more stable training loss, which means that the model has stronger abstract representation ability. Therefore, the learning rate is uniformly set to 0.0001 in the experiments. Next, we analyze the influence of different feature fusion methods on the classification accuracy; Table 4 summarizes the experimental results. In Table 4, SpecSAM means that only the features along the spectral dimension extracted by the SpecSAM model are used for classification, while SpatSAM means that only the features along the spatial dimension extracted by the SpatSAM model are used for classification. The symbol – denotes the concatenation operation between features, the symbol + denotes point-wise addition, and the symbol × denotes point-wise multiplication. These three operations can jointly utilize the features extracted along the spatial and spectral dimensions. It can be seen from the statistics that, on the whole, the classification accuracy of SpecSAM or SpatSAM alone is lower than that obtained by using spectral and spatial features simultaneously. This indicates that the classification accuracy can be further improved by using both spatial and spectral information, and it also verifies the effectiveness of the structure of the proposed model. Among the three feature fusion methods (concatenation, addition and multiplication), the multiplication operation always enables the model to obtain higher overall classification accuracy. Therefore, the point-wise multiplication operation is selected as the feature fusion method in the subsequent experiments.
In addition, the other hyperparameters and basic experimental settings are given directly by referring to the relevant literature. In the training process, the number of iterations is set to 600, the batch size is set to 64, and the Adam algorithm is used for optimization to ensure that the network parameters can be fully updated and optimized. In the applied self attention mechanism, the number of heads is 8. In the process of pixel embedding, the number of convolution kernels is uniformly set to 128, so the dimension of the feature vectors extracted along both the spectral and spatial dimensions is 128. At the end of the designed model, the MLP that plays the role of the classifier is composed of two fully connected layers, whose numbers of neurons are 128 and K, respectively (K denotes the number of classes in the HSI). In addition, the cross entropy loss is used as the loss function for model training.
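An illustrative training loop with these settings (600 iterations, batch size 64, Adam with lr = 0.0001, cross entropy); the stand-in model and random tensors are placeholders so the sketch runs on its own, and they are not the authors' code or data pipeline.

```python
import torch
import torch.nn as nn

# Placeholder model: flattened 103-band, 16x16 spectral cubes (UP data set),
# 9 output classes. The real model would be the full DSS-TRM network.
model = nn.Sequential(nn.Flatten(), nn.Linear(103 * 16 * 16, 9))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for it in range(600):                        # 600 iterations
    cubes = torch.randn(64, 103, 16, 16)     # stand-in batch of input cubes
    labels = torch.randint(0, 9, (64,))      # stand-in batch of class labels
    loss = criterion(model(cubes), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```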
Results and analysis
To verify the advantages of the proposed method in HSI classification, two machine learning methods, two classical CNN-based methods and two advanced attention-based methods are selected for comparative experiments. These methods are briefly described below.
RBF-SVM (Radial Basis Function-SVM): A classical classifier widely used in HSI classification. When processing high-dimensional data, RBF-SVM can achieve better classification performance compared with other machine learning classifiers.
EMP+SVM (Extended Morphological Profiles + SVM): Firstly, EMP features are extracted from the HSI, and then an RBF-SVM is used to complete the classification. Compared with RBF-SVM, the introduction of EMP features makes better use of the spatial features in the HSI, so as to obtain higher classification accuracy. In the experiments, the parameters of EMP are set by referring to the relevant literature.
3D-CNN (Y. Li et al., 2017): A classical supervised deep learning model, which can fully extract the spatial-spectral features in HSI utilizing 3D convolution. Specifically, this method consists of two 3D convolutional layers and one fully connected layer.
S-CNN (Liu et al., 2018): In this method, pixel pairs are taken as inputs, and based on the CNN model, the loss function is modified to realize metric learning, so as to ensure that samples of the same class cluster together and samples of different classes separate from each other in the deep metric space.

DBMA: DBMA is short for the double-branch multi-attention mechanism network. In the DBMA model, two network branches are built to extract the spectral and spatial features in HSI, respectively. In addition, two different types of attention mechanism are applied in the two branches, so as to further improve the classification accuracy.
CACNN: This method first extracts spectral-spatial features in HSI using 2D and 3D CNNs respectively, then utilizes the NonLocalBlock, serving as a typical attention mechanism, to combine these two kinds of features. Finally, a deep multilayer feature fusion strategy is used to combine the features of different hierarchical layers, so as to further improve the classification accuracy.
For a fair comparison, all methods are trained with 200 labeled samples. The hyperparameters and basic experimental settings of the methods used for comparison are consistent with the relevant literature. The overall classification accuracy (OA), average classification accuracy (AA) and kappa coefficient are selected as the quantitative evaluation measures. In addition, to reduce the fluctuation of the classification results caused by the randomness of sample selection, the average value of 10 experiments is used as the final result to measure the classification performance.
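For reference, the three evaluation measures can be computed from a confusion matrix as follows; these are the standard definitions, and the function name is our own.

```python
import numpy as np

def classification_metrics(conf: np.ndarray):
    """OA, AA and kappa from a K x K confusion matrix
    (rows: true class, columns: predicted class)."""
    n = conf.sum()
    oa = np.trace(conf) / n                            # overall accuracy
    aa = np.mean(np.diag(conf) / conf.sum(axis=1))     # mean per-class accuracy
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)                       # kappa coefficient
    return oa, aa, kappa
```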
Tables 5-7 list the experimental results of different methods on the three HSI data sets. Several observations can be obtained from the statistical results in the tables.
(1) The classification performance of the traditional classification methods is worse than that of the deep learning-based classification methods. The traditional classification methods are all shallow models, which cannot make full use of the deep features in HSI, so they cannot obtain satisfactory classification results. More specifically, the classification accuracy of EMP+SVM is significantly higher than that of SVM on all three HSI data sets, indicating that the introduction of EMP features can effectively improve the classification performance.

(2) Both 3D-CNN and S-CNN use convolution operations to extract the spatial-spectral features in HSI, so they can effectively improve the classification accuracy compared with the traditional methods. Furthermore, S-CNN achieves metric learning by improving the network structure and loss function, so its classification performance is generally better than that of 3D-CNN.

(3) The introduction of the attention mechanism can further improve the classification performance of deep learning models. On the three HSI data sets, the classification performance of the methods equipped with attention mechanisms is better than that of the general deep learning methods. For example, the OA of DBMA, CACNN and DSS-TRM on the SA data set is above 98.5%, which is about 2.5-3.2% higher than that of 3D-CNN and S-CNN.

(4) Among all the classification methods in Tables 5-7, the proposed method, DSS-TRM, obtains the best classification results. Compared with other deep learning models, the advantages of DSS-TRM lie in the full use of spatial-spectral information and the self attention mechanism. On the one hand, the proposed method first extracts the deep features along the spectral dimension and the spatial dimension, respectively, and then performs feature fusion, which makes full use of the spatial-spectral information in HSI. On the other hand, building a backbone network by stacking transformer layers containing the self attention mechanism, and introducing residual connections for feature reuse, enables the model to focus more on the deep features beneficial to the classification task, thereby obtaining better classification performance.
The average value of the classification accuracy shows the classification performance of the different methods from a statistical point of view. To compare the stability of the classification results of the different methods, three box plots are drawn based on the OA. In Figure 5, different colors distinguish the different classification methods, and circles (○) represent outliers in the experimental results. In general, the stability of the classification results of the deep learning models is better than that of the traditional classification methods, and the introduction of the attention mechanism further improves the stability of the classification results. The box corresponding to the proposed method has the smallest length, indicating that the classification results of the proposed method are the most stable among all the methods.
Finally, we draw classification maps using the label predictions of different methods, and compare and analyze the classification results from a visual perspective. Compared with the quantitative measurements, classification maps can display classification results more intuitively. As shown in Figures 6-8, as OA increases, the misclassification phenomenon and noise in the classification maps gradually decrease. The classification map of the proposed method is the closest to the ground truth, indicating that the classification results of the proposed method can better restore the real distribution of surface features.
Influence of the number of training samples
Deep learning models need enough labeled samples for network optimization, while in practice it is very difficult to accurately label the pixels in an HSI. Therefore, deep learning models for HSI classification should be adaptable to changes in the number of training samples. To explore the classification performance of the different methods when the training samples gradually decrease, the number of training samples is reduced from 200 to 100 at intervals of 20 for the experiments. As can be seen from Figure 9, with the reduction of training samples, the classification accuracy of all classification methods gradually decreases. The accuracy curves of the three attention-based deep learning models, DBMA, CACNN and DSS-TRM, change relatively smoothly, indicating that they possess better adaptability to changes in the number of training samples. In addition, it is noted that the accuracy curve of the proposed method is always higher than those of the other methods, indicating that it possesses the best classification performance when the training samples are gradually reduced.
Influence of the spatial size of the input cubes
In the experiments, cubes with a certain size around the center pixels are selected as the inputs of the model, so as to make full use of the spatial-spectral information in HSIs. Obviously, the spatial size of the input cubes can affect the classification accuracy to a certain extent. Therefore, by combining cubes of different size along the spectral and spatial dimension, the influence of the spatial size of the input cubes on the classification results of the proposed method is explored. It can be seen from Table 8 that when the size of the cubes along the spectral dimension is fixed, the classification accuracy rises as the size of the cubes along the spatial dimension increases. When the size of the cubes along the spatial dimension is fixed, the classification accuracy rises first and then declines with the increase of the size of the cubes along the spectral dimension.
Conclusion
In this work, we introduce the transformer model, which is widely used in natural language processing, into HSI classification. Based on the transformer, we build a self attention model of the spectral dimension and a self attention model of the spatial dimension. Then we fuse the features of the two models to form spatial-spectral features for the final classification. The proposed framework (DSS-TRM) uses the self attention mechanism to extract important features in the spectral and spatial dimensions. Therefore, compared with CNNs, the proposed DSS-TRM can improve the classification accuracy. Experimental results on three real HSI data sets demonstrate that DSS-TRM can outperform CNNs and CNN-based attention models. Moreover, this work provides a novel means for HSI processing and analysis.
Disclosure statement
No potential conflict of interest was reported by the author(s).

Table 8. OA of the proposed method when the spatial size of the input cubes is changed.
"Environmental Science",
"Computer Science"
] |
DGPolarNet: Dynamic Graph Convolution Network for LiDAR Point Cloud Semantic Segmentation on Polar BEV
Semantic segmentation of LiDAR point clouds has become an important research topic for autonomous driving systems. This paper proposes a dynamic graph convolution neural network for LiDAR point cloud semantic segmentation using a polar bird's-eye view, referred to as DGPolarNet. LiDAR point clouds are converted to polar coordinates, which are rasterized into regular grids. The points mapped onto each grid are distributed evenly, which solves the problem of the sparse distribution and uneven density of LiDAR point clouds. In DGPolarNet, a dynamic feature extraction module is designed to generate edge features of perceptual points of interest sampled by the farthest point sampling and K-nearest neighbor methods. By embedding the edge features with the original point cloud, local features are obtained and input into PointNet to quantize the points and predict the semantic segmentation results. The system was tested on the SemanticKITTI dataset, and the segmentation accuracy reached 56.5%.
Introduction
LiDAR sensors are essential devices for environmental perception tasks in smart vehicles, as they can scan millions of 3D points in each frame [1-3]. In recent years, LiDAR-based semantic segmentation technology has achieved rapid development. However, LiDAR point clouds have the characteristics of irregular structure, uneven density and sparse distribution, which pose challenging problems for deep learning approaches.
Three-dimensional (3D) segmentation methods [4,5] based on machine learning, such as the support vector machine (SVM), random forest and naïve Bayesian supervised learning, usually utilize the geometrical or distribution features of point clouds to train models. The feature extraction [6,7] process for large-scale LiDAR point clouds is computationally intensive, which limits machine learning approaches for outdoor environment perception tasks. Meanwhile, because the LiDAR point cloud density is high in the near field and loose in the far field, such methods have poor adaptability and expansion capabilities [8]. From traditional machine learning methods to various deep neural networks, semantic segmentation approaches based on projected views, points, voxels, and graphs have been widely researched. The multiview projection and voxel mapping methods lead to feature information loss. Meanwhile, due to the uneven point distribution density in far and near fields, the unstable sampled features cause low performance of the trained network model. PolarNet [9] has been proposed to rasterize the polar coordinates of LiDAR points into regular grids as input into a convolution neural network (CNN) model for semantic segmentation. Figure 1 compares the density distribution of a LiDAR point cloud frame in the Cartesian and polar BEV coordinate systems; the density distribution is more uniform under the polar BEV coordinate system. Although PolarNet solves the problem of the uneven density of LiDAR point clouds, its feature extraction method based on max-pooling operations causes the loss of detailed geometrical features.
This paper proposes a dynamic graph convolution network for LiDAR point cloud semantic segmentation using a polar bird's-eye view, referred to as DGPolarNet, as shown in Figure 2. The input of the proposed DGPolarNet is the original point cloud, and the output is the semantic segmentation result. Firstly, the LiDAR point clouds are converted to polar coordinates, which are registered into regular grids to balance the input data to DGPolarNet (a minimal sketch of this quantization step is given below). Then, a dynamic feature extraction module generates edge features based on perceptual points of interest sampled by the farthest point sampling (FPS) and K-nearest neighbor (KNN) methods. Finally, the extracted edge features are combined with the original point cloud through skip connections to recover lost spatial information and enhance local features with describable semantic information. The extracted dynamic edge features are input into a convolutional neural network to provide discriminant features for the semantic segmentation network.
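The following NumPy sketch assigns points to polar BEV cells; the grid resolution (n_r, n_theta) and the maximum range r_max are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def polar_bev_grid(points: np.ndarray, n_r: int = 480, n_theta: int = 360,
                   r_max: float = 50.0):
    """Map each LiDAR point (x, y, z, ...) to a polar BEV cell index.
    points: (N, >=3) array; returns per-point (radius, azimuth) cell indices
    into an (n_r, n_theta) grid."""
    r = np.hypot(points[:, 0], points[:, 1])         # radius in the x-y plane
    theta = np.arctan2(points[:, 1], points[:, 0])   # azimuth in (-pi, pi]
    r_idx = np.clip((r / r_max * n_r).astype(int), 0, n_r - 1)
    t_idx = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    return r_idx, t_idx
```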
The main contributions of this paper are as follows: (1) The semantic segmentation based on a polar bird's-eye view (BEV) solves the problems of the sparse distribution and uneven density of LiDAR point clouds. (2) The edge features generated from the points of interest sampled by FPS and KNN are more discriminative than the local features computed by KNN alone (a minimal FPS sketch is given below).
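As context for contribution (2), here is a generic O(mN) farthest point sampling routine; it is a textbook implementation under our own naming, not the authors' code.

```python
import numpy as np

def farthest_point_sampling(xyz: np.ndarray, m: int) -> np.ndarray:
    """Pick m points of interest that are mutually far apart.
    xyz: (N, 3) point coordinates; returns the indices of the m samples."""
    N = xyz.shape[0]
    idx = np.zeros(m, dtype=int)
    dist = np.full(N, np.inf)
    idx[0] = 0                                       # arbitrary seed point
    for k in range(1, m):
        d = np.sum((xyz - xyz[idx[k - 1]]) ** 2, axis=1)
        dist = np.minimum(dist, d)                   # distance to nearest sample
        idx[k] = int(np.argmax(dist))                # farthest remaining point
    return idx
```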
Related Works
In this section, we briefly survey the related works of 3D semantic segmentation, including neural network models to process converted multiviews, voxels, points, and graphs.
Multiview Projection Methods
To reuse deep neural network models for semantic segmentation in 2D images, 3D point clouds are projected onto 2D images or spherical images for pixel-wise semantic labeling. Lawin et al. [10], Boulch et al. [11], and Tatarchenko et al. [12] mapped 3D point clouds onto multiple 2D images from different viewports, which were input into deep learning networks, such as the convolutional neural network (CNN), the fully convolutional network (FCN), and a fully convolutional U-shaped network with skip connections, for pixel-wise semantic labeling. Using an inverse mapping procedure, semantic segmentation was implemented in the 3D point clouds. Su et al. [13] proposed a sparse lattice network (SPLATNet) with a bilateral convolution layer. Original point clouds were mapped to a sparse lattice, in which valid grids were input into bilateral convolution layers for hierarchical and spatial feature extraction. SPLATNet was a joint 2D-3D network with a series of 1 × 1 CONV layers for the fusion of the extracted features of 3D point clouds and multiview images. Wu et al. [14] proposed a semantic segmentation framework, SqueezeSeg. It utilized spherical projection to transform sparse and irregular 3D point clouds into dense 2D grids, which were input into a convolutional network, SqueezeNet, for feature extraction. The segmentation results were further optimized based on conditional random fields. Similarly, Milioto et al. [15] proposed RangeNet++, operating on the range views of the spherical projection of LiDAR point clouds. To improve the computation speed, they utilized a GPU parallel programming technique to speed up the postprocessing of the KNN operations. However, such 3D semantic segmentation methods based on 2D projection were sensitive to viewport selection and had low accuracy when processing point clouds of sparse distribution.
Su et al. [16] proposed a multiview convolutional neural network (MVCNN) that captured 2D images from different viewports and aggregated them into 3D shape descriptors by convolutional and pooling operations. These descriptive features were fed into a semantic segmentation network. However, this method only allowed selecting fixed viewports and ignored a large amount of geometric space information, which was not suitable for large-scale complex scenes. To generate discriminative information, Feng et al. [17] proposed a group-view convolutional neural network (GVCNN) based on MVCNN. The GVCNN used a fully convolutional network to extract view-level descriptors. In addition, grouping modules were used to learn association information and distinguish information between different viewports. Based on content-based discrimination, group-level descriptors were generated by dividing multiviews into several groups. The network applied a fully connected layer to complete the identification task. Classification performance was significantly improved by emphasizing discriminative information in the 3D shape descriptor, which was discovered at the group level. However, the GVCNN used maximum pooling for multiview clustering operations, which lost some useful information. Wang et al. [18] proposed a recurrent clustering and pooling convolutional neural network (RCPCNN). The RCP module was designed to generate dominant sets based on the similarity of multiple views. The RCPCNN grouped clusters of the dominant set, which were used for network updating by the pooling method. To improve the adaptability of the semantic segmentation network, the RCP module used the cyclic clustering and pooling module to generate feature vectors automatically. To fully exploit the correlation between multiple views, Ma et al. [19] combined the long short-term memory (LSTM) network with a CNN for 3D shape recognition. The 2D CNN network was used to extract the low-level features for each image of the view subsequences. The extracted features were input into the LSTM network as a time series. Through a sequence voting layer, the extracted features were aggregated into shape descriptors. This model fully utilized the advantages of CNN and LSTM, which effectively improved the discriminative ability of multiview descriptors.
Voxelization Methods
Maturana et al. [20] proposed VoxNet to convert unstructured point cloud data into grid data as the input of a 3D CNN. Point clouds were mapped to multiple 3D grids using an occupancy grid algorithm. The value of each grid cell was normalized and fed into the convolutional layers of the network to generate feature maps. A max-pooling method was performed on nonoverlapping blocks of voxels. Rethage et al. [21] proposed a fully convolutional point network (FCPN) for semantic voxel labeling. In low-level feature abstraction layers, the FCPN employed PointNet with a uniform sampling strategy to ensure the permutation invariance of local geometric feature extraction. The high-dimension feature extraction network formed an octree-like nonoverlapping feature space to reduce memory costs. To enhance the performance of convolutional networks on sparse 3D voxelized data, Graham et al. [22] designed submanifold sparse convolutional networks (SSCNs), which fixed the active sites of sparse convolutions to keep sparsity stable over multiple layers. To solve the submanifold dilation problem, the SSCNs only process active voxels. Moreover, strided operations incorporating pooling or strided convolution were introduced for data fusion in the hidden layers between disconnected components. Wu et al. [23] proposed 3DShapeNet, which represented 3D geometry features as a probability distribution of binary variables on 3D voxels. Binary tensors were fed into three layers of convolutional filters to extract features. Although these methods solved the problem of processing unstructured point clouds, the memory usage may increase cubically with the increase in resolution.
Riegler et al. [24] proposed OctNet with adaptive spatial partitioning capability, which replaced fixed-resolution voxels with a flexible octree structure, effectively solving the problem of the high memory consumption of voxels. It divided the 3D space hierarchically into a set of unbalanced octree structures. In the octree structures, each leaf node stored a pooled feature representation. The 3D space was partitioned based on data density. Computational storage resources were dynamically pooled based on the input 3D structure, which greatly reduced memory consumption and improved computational speed. Wang et al. [25] proposed an O-CNN that implemented convolution operations only on sparse octants occupied by 3D surface boundaries. The O-CNN used the sparsity of the octree representation and the local orientation of the shape to reasonably allocate memory and improve computational efficiency. Xu et al. [26] proposed a learning-free 3D point cloud segmentation strategy based on the octree structure. The graph structure was generated based on local contextual information for point cloud voxelization and clustering of voxels. Perceptual grouping was utilized to segment 3D point clouds in a purely geometric way. This method had better classification results for complex scenes and objects with nonplanar surfaces. However, the traversal of the octree structure caused high time complexity when generating a high-resolution octree.
To solve the problems of the high memory consumption and long training time of voxelized networks, Li et al. [27] proposed a field-probing neural network for 3D data (FPNN). The 3D space was represented by a 3D vector field as the input to the network. Instead of a convolutional layer in the CNN, a field-probing filter was used to extract features from the 3D vector field efficiently. In this way, the computational complexity only depended on the number of field-probing filters and sampling points, and it was not affected by the resolution of the voxel map. Le et al. [28] proposed a point-grid hybrid model, PointGrid, which sampled a certain number of points in each grid cell with a simple point quantization strategy. From these sampled points, local ensemble features were extracted. Tchapmi et al. [29] proposed SEGCloud, which voxelized 3D point clouds and generated coarse downsampled voxel labels using a 3D-FCNN. The voxel labels were interpolated back to the 3D points by trilinear interpolation layers. The original 3D point features were combined with the resulting class labels after interpolation. By using fully connected conditional random fields, the final class labels were inferred to obtain fine-grained semantic segmentation. In SEGCloud, the shared computation using fully convolutional networks reduced a certain amount of computational cost. Although the voxel-based semantic segmentation methods utilized lossy compression of sparse point clouds as preprocessing to reduce computational memory consumption, this caused a loss of the geometric and topological information of the point clouds.
Point Methods
To avoid the loss of original information caused by projection or voxelization, Qi et al. [30] proposed a point-wise semantic segmentation network, PointNet, that directly operates on unordered point clouds without sampling preprocessing. PointNet learned geometric features through shared multilayer perceptrons (MLPs), which were input into a symmetric max-pooling function for global feature extraction. However, PointNet, based on a pointwise MLP, focused on the perception of global geometric information and ignored the topological structure of the local features of points. To extend the applicability of PointNet to complex scenes, Qi et al. [31] proposed a deep hierarchical feature learning network, PointNet++. PointNet++ exploited an individual PointNet to generate local features in multiscale set abstractions and utilized hierarchical concepts to extract global features.
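The permutation invariance that PointNet obtains from a shared MLP followed by a symmetric max-pooling function can be demonstrated in a few lines; the layer widths below are arbitrary placeholders, not the architecture of the original network.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 64)), np.zeros(64)    # weights shared by all points
W2, b2 = rng.normal(size=(64, 128)), np.zeros(128)

def global_feature(points):
    """points: (N, 3). The same MLP is applied to every point, then max
    pooling over the point axis yields a global feature that does not
    depend on the ordering of the input points."""
    h = np.maximum(points @ W1 + b1, 0.0)   # shared MLP layer 1 + ReLU
    h = np.maximum(h @ W2 + b2, 0.0)        # shared MLP layer 2 + ReLU
    return h.max(axis=0)                    # symmetric max-pooling

pts = rng.normal(size=(128, 3))
perm = rng.permutation(128)
assert np.allclose(global_feature(pts), global_feature(pts[perm]))
```

Because max pooling is a symmetric function, shuffling the rows of `pts` leaves the global feature unchanged, which is exactly why no ordering or sampling preprocessing is needed.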
Considering both orientation awareness and scale awareness, Jiang [32] proposed a PointSIFT module that supported various PointNet-based architectures. The module used orientation-encoding convolution to generate multiscale representation from eight orientations. A set abstraction module in the downsampling stage and a feature propagation module in the upsampling stage were combined to obtain the features. Li et al. [33] proposed SO-Net, which constructed a self-organizing map (SOM) to model the spatial distribution of point clouds. The SOM node features were aggregated into a global feature vector using average pooling. A parallel branching network with fully connected branches and convolutional branches was used to recover a single feature vector from the global features to represent the input point cloud.
Because LiDAR point clouds have an imbalanced distribution, several invalid grids have been registered in far fields for traditional grid rasterization methods [34,35]. BEV maps represent point cloud data from a top-down perspective without losing any scale and range information [36,37]. By projecting raw point clouds into a fixed-size polar BEV map, Zhang et al. [9] proposed a PolarNet that extracted the local features in polar grids and integrated them into a 2D CNN for semantic segmentation.
Graph Methods
Currently, the graph neural network (GNN) research proposed by Scarselli et al. [38] has been widely studied for 2D and 3D semantic analysis tasks. Bruna et al. [39] applied convolutional neural networks in non-Euclidean domains modeled with graphs to perform local filtering in both the spatial and spectral domains. Simonovsky et al. [40] proposed an edge-conditioned convolution network on a local graph neighborhood. By integrating edge labels and an asymmetric edge function, the relationship between local points was established. Landrieu et al. [41] introduced the superpoint graph, an attributed directed graph with edge features. Gated graph neural networks and edge-conditioned convolution were utilized to obtain contextual information for large-scale point cloud segmentation. Landrieu et al. [42] proposed a 3D point cloud oversegmentation strategy to improve segmentation accuracy. Jiang et al. [43] aggregated point features into edge branches in a hierarchical manner to enhance the description of local features. Kipf et al. [44] proposed a semi-supervised classification network, the graph convolutional network (GCN). To extend the applicability of the GCN, Te et al. [45] proposed a regularized graph convolutional neural network (RGCNN), which mainly consisted of graph construction, graph convolution, and feature filtering layers. In each convolution layer, the graph Laplacian matrix was updated to improve the flexibility to adapt to dynamic graphs. The Chebyshev polynomial was also used to reduce computational complexity. Experiments indicated that the RGCNN had good performance in both point cloud classification and segmentation tasks at different densities. Wang et al. [46] developed a dynamic graph convolutional neural network (DGCNN). To obtain the local features of point clouds, the DGCNN used the EdgeConv operation to extract the features of centroids in a local neighborhood map. EdgeConv only considered the point coordinates and the distances of the neighboring points. Because the vector directions between neighboring points were ignored, some local geometric information was eventually lost in this method.
To enhance local feature representation, this paper proposes DGPolarNet, which inherits ideas from DGCNN, PolarNet, and PointNet++. The local edge features generated from the points of interest sampled by FPS and KNN are more discriminative than the local features computed by KNN alone. The extracted local features and the original point cloud data are aggregated for semantic segmentation to prevent feature loss during the convolution and pooling operations.
DGPolarNet for LiDAR Point Cloud Semantic Segmentation
Through a comprehensive analysis of global and local features, this paper proposes DGPolarNet, a dynamic graph convolution network with FPS and KNN for LiDAR point cloud semantic segmentation based on a polar BEV, as shown in Figure 3.
BEV Polar Converter
To solve the invalid grid waste problem of processing the uneven distribution of LiDAR scanning, the polar BEV coordinate system is utilized to register 3D point clouds into regular grids. Using Equations (1) and (2), the 3D point (x, y, z, t) is converted into the polar coordinate (r, θ, t), where (r, θ) is the polar coordinate and t is the intensity value of the laser reflection:

r = √(x² + y²)  (1)

θ = arcsin(y / √(x² + y²))  (2)
The points in the polar BEV grid defined as p_i(r_i, θ_i, t_i) ∈ P are rasterized into a 3D array V^(1) of size (n_1^(1) × n_2^(1) × n_3^(1)), which is then input into the FPS-KNN dynamic network as the first layer of the backbone network. For the first layer V^(1), n_1^(1) is the batch size, and n_3^(1) is the number of points in each batch. Each point has three attributes {r, θ, t}; thus, n_2^(1) = 3.
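As an illustration of the converter, the sketch below maps (x, y, z, t) points to polar grid indices. The (480 × 360 × 32) resolution and the coordinate ranges come from the experiments section; the use of arctan2 for the full angle (rather than the arcsin form of Equation (2)) is a robustness choice of this sketch, not of the paper.

```python
import numpy as np

def polar_bev_indices(points, r_bins=480, theta_bins=360, z_bins=32,
                      r_max=50.0, z_range=(-3.0, 1.5)):
    """points: (N, 4) array of (x, y, z, t).  Returns the (r, theta, z)
    grid index of every point in a polar BEV grid."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x ** 2 + y ** 2)
    theta = np.arctan2(y, x)  # angle in (-pi, pi]
    ri = np.clip((r / r_max * r_bins).astype(int), 0, r_bins - 1)
    ti = np.clip(((theta + np.pi) / (2 * np.pi) * theta_bins).astype(int),
                 0, theta_bins - 1)
    zi = np.clip(((z - z_range[0]) / (z_range[1] - z_range[0])
                  * z_bins).astype(int), 0, z_bins - 1)
    return np.stack([ri, ti, zi], axis=1)

pts = np.random.uniform(-50, 50, size=(1000, 4))
print(polar_bev_indices(pts)[:3])
```

Because cell area grows with the radius r, far-field cells collect more points each, which is how the polar layout balances the uneven density of LiDAR scans.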
FPS-KNN Dynamic Network
The FK-EdgeConv (FPS-KNN EdgeConv) method is developed by integrating the FPS and KNN algorithms to extract the comprehensive edge features of the nearest and farthest vertices, as shown in Figure 4. The FPS-KNN dynamic network constructs a directed graph for each layer using the FK-EdgeConv method. For the l-th network layer, the dynamic graph G^(l) is defined as Equation (3), where the datasets V^(l) and E^(l), with a dimension of (n_1^(l) × n_2^(l) × n_3^(l)), represent the set of vertices and edges, respectively.
To obtain more effective semantic features, FPS and KNN are utilized to generate directed graphs from the LiDAR point clouds instead of the fully connected edges, which suffer from high memory consumption. The FPS and KNN operations sample the k^(l) farthest and k^(l) nearest neighbors, respectively, from the vertex set V^(l). Thus, the edge set E^(l) has 2 × k^(l) directed edge elements, which are calculated based on the target point p_i^(l) and the neighbor points using Equation (6).
Then, the vertices in V^(l) and the edges in E^(l) are input into the Conv2D and pooling operations to generate the output dataset V^(l+1) with a dimension of (n_1^(l+1) × n_2^(l+1) × n_3^(l+1)). The edge feature computation performed by the Conv2D is defined as h(p_i, ε_ij). We utilize max-pooling and min-pooling operations to extract local features from the sampled farthest and nearest vertices, respectively. Accordingly, the output p_i^(l+1) ∈ V^(l+1) of the FK-EdgeConv operation is denoted as Equation (7). The dataset V^(l+1) is also the input of the (l+1)-th layer processed by the following FK-EdgeConv operation. In particular, only the directed graph of the first layer of the FPS-KNN dynamic network is built based on the points in the polar BEV coordinate system. The following layers are constructed using the features extracted from the previous layer.
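The following sketch shows one FK-EdgeConv step for a single vertex: its k nearest and k farthest neighbors supply the 2k directed edges, the farthest edges are max-pooled and the nearest edges min-pooled. The edge function here simply concatenates the center feature with the feature offset, in the style of EdgeConv; the learned Conv2D of Equation (7) is abstracted away, so this is a structural sketch rather than the trained operator.

```python
import numpy as np

def fk_neighbors(points, i, k):
    """Indices of the k nearest and k farthest neighbors of point i."""
    d = np.linalg.norm(points - points[i], axis=1)
    order = np.argsort(d)
    return order[1:k + 1], order[-k:]   # skip the point itself

def fk_edgeconv(points, feats, i, k):
    """One FK-EdgeConv step for vertex i with a placeholder edge function
    h(p_i, eps_ij) = concat(center feature, feature offset)."""
    near, far = fk_neighbors(points, i, k)
    edge = lambda j: np.concatenate([feats[i], feats[j] - feats[i]])
    near_feat = np.min([edge(j) for j in near], axis=0)  # min-pool nearest
    far_feat = np.max([edge(j) for j in far], axis=0)    # max-pool farthest
    return np.concatenate([near_feat, far_feat])

pts = np.random.randn(200, 3)
print(fk_edgeconv(pts, pts, i=0, k=20).shape)  # (12,) = 2 * (3 + 3)
```

The nearest neighbors capture the internal geometrical structure around the vertex, while the farthest samples trace the external contour, which is why the combined features are more discriminative than KNN-only edges.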
The FK-EdgeConv unit, as shown in Figure 3b, is developed to compute the local features of the array V^(l) = {v_i^(l)} of dimension (n_1^(l) × n_2^(l) × n_3^(l)) for the l-th layer. The FPS and KNN algorithms are implemented on V^(l), and the output feature array V^(l+1) = {v_i^(l+1)} is generated as the input of the following layer. In each layer, FK-EdgeConv is utilized to calculate the dynamic feature graph model as local semantic features, which are further aggregated for semantic feature enhancement. In our practice, we implement five FK-EdgeConv operations and four down operations accordingly.
After extracting the graph features of multiple layers, the down unit, as described in Figure 3c, is implemented to fuse the extracted features of layers l and l′. The inputs of the down unit consist of two feature sets of different sizes. The feature arrays V^(l) and V^(l′) are reshaped and concatenated into the aggregated feature array of dimension (n_1^(l′) × n_2^(l′) × n_3^(l′)). Using the Conv+ReLU operation on each batch of V^(l′), the higher-dimensional feature array V^(l′+1) is generated as the input of the following layer.
By using a skip architecture, the local features of FK-EdgeConv operations and down processes are joined as aggregated features, which are input into the postprocessing for global semantic segmentation. Similar to the skip architecture, the features generated by the FPS-KNN EdgeConv and down processes are reshaped by cropping and scaling operations to merge into the aggregated features in the concatenating procedure.
Postprocessing
The aggregation features generated by the FPS-KNN dynamic network are mapped back to their corresponding polar BEV grid as the input of postprocessing. A shared MLP is utilized to compute the semantic segmentation prediction. In the l-th layer, the feature set v^l is computed via Equation (8), where w_pqt^(lm) is the learnable parameter for the element (p, q, t) in layer m of the MLP.
Experiments and Analysis
The proposed model was tested on the SemanticKITTI [47] dataset. Compared with other typical semantic segmentation networks, the accuracy performance of the DGPolarNet model under several critical parameters is analyzed and discussed in this section.
Datasets
SemanticKITTI is a dataset of LiDAR point clouds collected by a Velodyne HDL-64E LiDAR and annotated with point-level semantic labels. It consists of a total of 43,551 frames from 22 sequences collected from inner-city traffic, including 23,201 for training and the rest for testing. Each frame has around 104,452 points on average. There are a total of 19 typical object classes, covering ground-related, structure, vehicle, nature, human, object, and outlier categories. The dataset is unbalanced in the point counts of different objects. For example, there is a small number of motorcyclist objects in most scenes, and only a few points are labeled for the motorcyclist class. In our experiment, we used one sequence for validation and nine sequences for training.
Semantic Segmentation Performance
Experiments in this section were conducted using a dual Intel(R) Xeon(R) Silver 4110 CPU @ 2.10 GHz, an NVIDIA Quadro RTX 6000 graphics card, and 64 GB RAM. In our experiments, the points in the range of −50 m < x < 50 m, −50 m < y < 50 m, and −3 m < z < 1.5 m were mapped to the polar BEV coordinate system and then rasterized into polar BEV grids with a resolution of (480 × 360 × 32). After analyzing the point distribution of the SemanticKITTI dataset, we specified the k value of each layer as equal to 20. In our experiment, we specified n_1^(1) = 1, n_2^(1) = 3, and n_3^(1) = 1,843,200. Our DGPolarNet model had 14 layers, among which the 1st layer was the converted polar BEV grid data, the 2nd to 11th layers were FPS-KNN dynamic networks, and the 12th to 14th layers were for postprocessing. V^(11) represented the aggregated features of the 11th layer, and V^(14) represented the final semantic score of the 14th layer. The softmax function was utilized as the loss function. Table 1 illustrates the data dimension for each layer. In the 14th layer, there were 19 segmentation scores for the 19 classes in SemanticKITTI accordingly. Figure 5 shows some samples of the semantic segmentation results using the proposed DGPolarNet method.

To evaluate the semantic segmentation performance, the mean intersection-over-union (mIoU) is applied (Equation (9)), where the variables TP_c, FP_c, and FN_c are the numbers of true-positive, false-positive, and false-negative predictions for class c, respectively, and C is the number of classes. Table 2 shows the segmentation mIoU performances on all the object classes of SemanticKITTI compared with the state-of-the-art methods. Our mIoU achieved 56.5% on average. Using the proposed DGPolarNet, the average segmentation IoUs of ground-related regions, buildings, vehicles, nature regions, humans, and other objects were 86.4%, 90.1%, 45.20%, 81.35%, 43.83%, and 52.40%, respectively.
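Equation (9) can be written out directly; the sketch below computes the per-class IoU and its mean from predicted and ground-truth label arrays, with random labels standing in for real network output.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """mIoU (Equation (9)): IoU_c = TP_c / (TP_c + FP_c + FN_c),
    averaged over the C classes present in the data."""
    ious = []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))
        fp = np.sum((pred == c) & (gt != c))
        fn = np.sum((pred != c) & (gt == c))
        denom = tp + fp + fn
        if denom > 0:          # skip classes absent from both pred and gt
            ious.append(tp / denom)
    return float(np.mean(ious))

pred = np.random.randint(0, 19, size=100_000)  # stand-in predictions
gt = np.random.randint(0, 19, size=100_000)    # stand-in ground truth
print(mean_iou(pred, gt, num_classes=19))
```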
Our method had good performance for motorcycles, trucks, bicyclists, roads, sidewalks, buildings, vegetation, and pole objects. However, its IoU was low for other-ground, motorcyclist, and bicycle objects, because the extracted features of these objects were not discriminative.

PointNet [30] extracted the global features from all the points directly and lacked the correlation among the local features. Meanwhile, PointNet implemented the semantic segmentation only using the 3D point coordinates without the intensity information. The laser reflection intensities of different materials were distinguished from each other. Without the intensity information, connected objects of different types were easily detected as one object. Thus, we introduced the intensity data as one input of the DGPolarNet to enhance the discriminative local features of the dynamic graph. Compared with PointNet, the mIoU performance of our model was improved by 41.9%. For the ground-related road, sidewalk, parking, and other-ground regions, our method increased by 31.8%, 43.7%, 42.6%, and 18.6%, respectively. RangeNet++ [15] projected the original point clouds onto the 2D range view, which caused spatial structure information loss and rasterization errors. In particular, when processing vehicle and human objects, the mIoU was only 27.3% and 14.2%, respectively. We used the polar BEV converter to solve the problem of uneven distribution of point clouds and applied both FPS and KNN to preserve the local geometrical features. Thus, the mIoU of our model was improved by around 4.3% compared with RangeNet++ and much improved for the vehicle and human objects.
Although PolarNet [9] utilized the polar BEV system to balance the input distribution, the feature extracted for each grid cell by a learnable simplified PointNet with a max-pooling operation was insufficient. To retain geometrical features, we constructed the dynamic graph for each BEV grid. Meanwhile, the extracted high-level semantic features were enhanced by the skip architecture over all the intermediate layers. Compared with the PolarNet network, the mIoU of our model was improved by 2.2%. For objects of complex shapes, such as vehicles and humans, our method improved by 14.5% on average.
Instead of only using KNN, we sampled both the farthest and nearest neighbors by integrating the FPS and KNN algorithms, which reduced the discriminative feature loss in the local feature encoding process. We conducted comparison experiments using the KNN method and the FPS-KNN method under different k values in dynamic graph construction, as shown in Table 3. When the k value was specified as 20, the models achieved the best performance; when k increased further, the performance of the two models degraded. Because the FPS-KNN method constructed feature maps capturing both the internal geometrical structures and external contours of objects, the encoded features of the dynamic graphs were more discriminative than those obtained using KNN alone. Table 4 analyzes the DGPolarNet performance through the true-positive (TP), false-positive (FP), and false-negative (FN) samples of the semantic segmentation results. Accordingly, the precision (P), recall (R), and F1 scores were calculated using Equation (10) to evaluate the semantic segmentation performance. The P, R, and F1 values in Table 4 indicated that the proposed DGPolarNet had lower segmentation accuracies for other-ground, bicycle, and motorcyclist objects than for the other objects. Because the point distribution of other-ground was similar to that of road and sidewalk objects, and motorcyclist objects were also similar to person objects, the segmentation of such objects did not perform well. For bicycle objects, there was a small number of points scanned on the sample surface, which caused insufficient training of the network model.
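Equation (10) reduces to the usual precision, recall, and F1 definitions; the sketch below evaluates them from per-class counts, using made-up numbers for illustration.

```python
def precision_recall_f1(tp, fp, fn):
    """P = TP/(TP+FP), R = TP/(TP+FN), F1 = 2PR/(P+R) (Equation (10))."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Hypothetical counts for a single class, for illustration only.
print(precision_recall_f1(tp=850, fp=120, fn=90))
```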
Conclusions
This paper proposes DGPolarNet, an efficient approach for semantic segmentation in LiDAR point clouds. The polar BEV converter is utilized to rasterize the LiDAR points into regular polar grids with an even point distribution. An FPS-KNN dynamic network is developed to construct dynamic directed graphs and extract the local features of each BEV grid. Employing skip connections, the graph features of each layer are aggregated into high-dimensional features. All the aggregated features of each BEV grid are then integrated into a shared MLP for semantic segmentation. We validate the proposed DGPolarNet on the SemanticKITTI dataset, on which it is more efficient than previous methods.
"Environmental Science",
"Computer Science",
"Engineering"
] |
Atopy and Inhaled Corticosteroid Use Associate with Fewer IL-17+ Cells in Asthmatic Airways
Background

Interleukin (IL)-17 plays a critical role in numerous immune and inflammatory responses and was recently suggested to contribute to the pathogenesis of nonatopic (non-eosinophil/neutrophil-dominant) asthma. We aimed to compare the expression of IL-17 in bronchial airways between atopic and nonatopic asthmatics, with/without inhaled corticosteroid (ICS) use, and to identify its major cellular source.

Methods

Bronchial biopsies from 114 patients with mild-to-moderate asthma were investigated: 33 nonatopic, 63 non-corticosteroid users, 90 nonsmokers. IL-17 expression was correlated with atopy and inflammatory cell counts (EPX, NP57, CD3, CD4, CD8, CD20, CD68), taking ICS use and smoking into account. Multiple linear regression analyses were used to determine the independent factors as well as the most relevant inflammatory cells contributing to IL-17 expression. Double immunostainings were performed to confirm the major cellular source of IL-17.

Results

In non-ICS users, nonatopic asthmatics had more IL-17+ cells in the airway wall than atopic asthmatics. In both atopic and nonatopic asthmatics, ICS use was associated with lower numbers of IL-17+ cells, independent of smoking. The number of IL-17+ cells was associated with the number of neutrophils (B: 0.26, 95% CI: 0.17–0.35) and eosinophils (B: 0.18, 95% CI: 0.07–0.29). The majority of IL-17+ cells were neutrophils, as confirmed by double immunostaining.

Conclusions

We show for the first time that atopy and ICS use are associated with lower numbers of IL-17+ cells in asthmatic airways. Importantly, IL-17+ cells were mostly neutrophils, which conflicts with the paradigm that lymphocytes (Th17) are the main source of IL-17.
Introduction
Asthma is a chronic inflammatory disease of the airways, characterized by reversible airway obstruction and bronchial hyperresponsiveness (BHR) [1]. One of the oldest ways to discern asthmatic patients is based on the presence or absence of atopy [2]. Not surprisingly, the underlying airway pathology of atopic versus nonatopic asthma is different, showing high numbers of eosinophils, T lymphocytes and Th2 cytokines (interleukin (IL)-4 and IL-5) in atopic asthma versus high numbers of neutrophils and non-Th2 cytokines (IL-8) in nonatopic asthma [3]. One of the cytokines that was recently suggested to contribute to the pathogenesis of nonatopic (non-eosinophil/neutrophil-dominant) asthma is IL-17 [4].
IL-17, also called IL-17A, is a proinflammatory cytokine, implicated in the development of autoimmunity, tumorigenesis and host defenses against bacterial and fungal infections [5]. In the lung, increased levels of IL-17 have been demonstrated in inflammatory disorders like asthma and chronic obstructive pulmonary disease (COPD) [6][7][8][9][10]. IL-17 was first shown to be produced by activated CD4 + memory T cells [11]. Thereafter, a specific subset of Th cells, namely the Th17 cells, has been put forward as its main producer [12,13]. Th17 cells have been shown to mediate airway inflammation and hyperresponsiveness associated with noneosinophilic asthma in mice, and importantly do not respond well to glucocorticoid treatment [14]. In humans, Th17 cells have also been suggested to play a role in regulating a neutrophil and macrophage dominant type of inflammation in the lung, particularly in severe, steroid-insensitive asthma and COPD [6]. In line with this, IL-17 levels were found to correlate positively with sputum neutrophilia in severe asthma [7,15].
On the other hand, IL-17 has also been implicated in Th2 responses. In mouse models of asthma, Th17 cells were shown to home to the lung and enhance not only neutrophilic airway inflammation but also Th2 cell-mediated eosinophilic airway inflammation [16]. In patients with allergic asthma, increased levels of IL-17 were demonstrated after a challenge with house dust mite [17].
Although there has been substantial interest in elucidating the role of IL-17 in neutrophil-dominant/nonatopic asthma in humans [4,18], our understanding regarding this phenotype of asthma is still very limited. Although recent studies suggest that a higher level of IL-17 expression is associated with severe asthma, the atopic status was not included in their analysis [19][20][21][22]. In fact, there are no data comparing IL-17 expression between atopic and nonatopic asthma patients. We therefore investigated the expression of IL-17 in bronchial biopsies from a large cohort of well characterized atopic and nonatopic asthmatic patients, also taking into account the effect of inhaled corticosteroid (ICS) use and smoking. Additionally, we identified the major cellular source of IL-17 in the airway walls of these asthma patients.

Patients

We investigated 114 stable, mild-to-moderate subjects with current asthma from the large asthma cohorts that were recruited previously by our research group in the University Medical Center Groningen [23]. Atopic and nonatopic patients, with or without ICS use, aged between 19 and 71 years were included (Table 1). All patients had a doctor's diagnosis of asthma and demonstrated reversibility and BHR to histamine and/or adenosine 5'-monophosphate (AMP) [23]. All patients also had alveolar and bronchial exhaled nitric oxide (NO) values measured on the Aerocrine NO system (Niox; Aerocrine AB, Stockholm, Sweden) in accordance with international guidelines, as described in an earlier study [23]. Atopic status was determined by Phadiatop for all 114 patients using the ImmunoCap system (Phadia AB, Uppsala, Sweden), and expressed as ratios (fluorescence of the serum of interest divided by the fluorescence of a control serum). A positive Phadiatop was defined as patient serum/control serum >1. The Medical Ethics Committee of the University Medical Center Groningen approved the study protocol and all subjects gave written informed consent.
Immunohistochemical staining and cellular quantification of bronchial biopsies
Paraffin-embedded bronchial biopsies were cut into 3-μm-thick sections. Sections were deparaffinized and, after antigen retrieval, incubated with appropriate polyclonal antibodies against IL-17 (R&D Systems, polyclonal Goat anti-Human, AF-317-NA), using the DAKO autostainer in three consecutive runs. The slides were included in random fashion in each run to avoid group-wise staining [19]. The number of positive cells was counted by a blinded observer in the submucosal area 100 μm under the basement membrane in the biopsy sample [19] using Aperio ImageScope software. The same techniques had already been applied for immunohistochemical staining and cellular quantification of other inflammatory cells, including: neutrophils (NP57, DAKO, Glostrup, Denmark), eosinophils (eosinophil peroxidase; EPX, laboratories of NA Lee and JJ Lee, Mayo Clinic, Scottsdale, AZ), macrophages (CD68, DAKO, Glostrup, Denmark), mast cells (AA1, DAKO, Glostrup, Denmark) and T-cells (CD3, CD4, CD8, DAKO, Glostrup, Denmark) [23]. Double immunostainings were performed to elucidate whether granulocytes are a source of IL-17 in bronchial biopsies of asthmatics. Primary neutrophil and eosinophil antibodies suitable for double staining with IL-17 were used; a polyclonal Rabbit anti-Human myeloperoxidase (MPO) antibody (DAKO, Glostrup, Denmark) was used to identify neutrophils and a Mouse anti-Human EPX antibody (Mayo Clinic, Scottsdale, AZ, USA) to identify eosinophils. After deparaffinizing the slides, a heat-induced antigen (epitope) retrieval protocol was used and blocking for endogenous peroxidase was applied. As secondary antibodies, a peroxidase-conjugated Swine anti-Rabbit IgG antibody (DAKO, Glostrup, Denmark) was used for detecting MPO-stained cells, a biotinylated Rabbit anti-Mouse antibody (DAKO, Glostrup, Denmark) for detecting EPX-stained cells, and an Alkaline Phosphatase-conjugated Donkey anti-Goat IgG antibody (SouthernBiotech, USA) for detecting IL-17-stained cells. Double immunostaining with lymphocytes was unnecessary because the vast majority of the IL-17 + cells showed the morphology of granulocytes. This was confirmed by the MPO/IL-17 and EPX/IL-17 double immunostainings.
Statistics
All analyses were performed using SPSS software (version 19.0; SPSS Inc., Chicago, IL). Normality of distributions was assessed using histograms and/or p-p plots.
For quantitative variables, one-way ANOVA followed by Tukey's post-hoc test was performed for multiple comparisons, and t tests or Mann-Whitney U tests were used for two-sample comparisons.
Chi-square tests were used to compare groups for dichotomous variables. Correlations were evaluated by Pearson (for normally distributed data) or Spearman (for non-normally distributed data) tests. Multiple linear regression analysis was used to assess the independent contribution of ICS use (yes/no), smoking (smoking vs. nonsmoking) and Phadiatop (atopic vs. nonatopic) to IL-17 expression (dependent variable). To find the most relevant inflammatory cells contributing to IL-17 expression, additional linear regressions were performed on inflammatory markers (neutrophil, eosinophil, T-cell, macrophage, mast cell) as independent variables (separately for each one or in combination with other inflammatory cells) and IL-17 as dependent variable, adjusting for atopy, smoking status and ICS use. For all statistical analyses, p values <0.05 were considered statistically significant.
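As a minimal sketch of the regression setup described above, the snippet below fits IL-17 expression on ICS use, smoking and atopy with ordinary least squares; the data frame and its column names are hypothetical placeholders, not the study's variables or data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data; 1 = ICS user / current smoker / atopic (Phadiatop > 1).
df = pd.DataFrame({
    "il17_cells": [12.1, 5.3, 8.7, 3.2, 15.4, 6.6],
    "ics_use":    [0, 1, 0, 1, 0, 1],
    "smoking":    [0, 0, 1, 0, 1, 0],
    "atopy":      [1, 1, 0, 0, 0, 1],
})

# IL-17 expression as dependent variable; ICS use, smoking and atopy as
# independent factors, mirroring the multiple linear regression described.
model = smf.ols("il17_cells ~ ics_use + smoking + atopy", data=df).fit()
print(model.params)      # B coefficients
print(model.conf_int())  # 95% confidence intervals
```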
Inflammatory cell counts in blood, sputum and bronchial biopsies
In the group of asthma patients who did not use ICS, there were trends towards lower blood eosinophil counts (p = 0.08) and lower percentage of sputum eosinophils (p = 0.06) in nonatopic asthma patients compared to atopic patients ( Table 2). In addition, nonatopic patients had more neutrophils in the bronchial submucosa than atopic patients, whereas the atopic asthmatics had more eosinophils (Table 2). In the group of patients who did use ICS, the nonatopic individuals had more CD8 + cells in the bronchial submucosa than the atopic ones ( Table 2).
Lower IL-17 expression in bronchial biopsies associated with atopy and ICS use
In the group of non-ICS users, there were significantly more IL-17 + cells in the bronchial submucosa of nonatopic asthmatics compared to atopic ones (Fig 1). In line with this finding, a negative correlation was found between IL-17 + cell numbers and the Phadiatop score (r s = -0.37, p<0.001) (Fig 2).
Both atopic and nonatopic asthma patients treated with ICS had lower numbers of IL-17 + cells than those without ICS treatment (Fig 1). There were 9 subjects in the atopic group and 5 subjects in the nonatopic group who used high doses of ICS (>1000 μg daily) (Table 1). There was no association between the dose of ICS (high vs. mild-moderate doses) and the cellular infiltrate.
There was a significant negative correlation between IL-17 + cell numbers and reversibility levels in the total population who did not use ICS (both atopic and nonatopic subjects) (r s = -0.33; p = 0.01) (S1A Fig), in line with a negative correlation between neutrophil levels and reversibility levels in the total population who used ICS (both atopic and nonatopic subjects) (r s = -0.27; p = 0.04). There was also a negative correlation between FEV 1 % predicted and IL-17 + cell numbers in the atopic individuals who did not use ICS (r s = -0.39; p = 0.01) (S1B Fig). There was no association between current smoking and IL-17 levels (S2 Fig), and current smoking had no effect on IL-17 counts in all groups of asthmatics.
IL-17 expression positively associated with neutrophilic inflammation
The number of IL-17 + cells in airway wall biopsies correlated significantly with the number of neutrophils, both in atopic (r s = 0.44; p<0.001) and nonatopic asthmatics (r s = 0.45; p = 0.009) (Fig 3A), and both in ICS users (r s = 0.35; p = 0.01) and non-ICS users (r s = 0.48; p<0.0001) (Fig 3B). We found that the majority (~90%) of IL-17 + cells were granulocytes, mostly neutrophils, as indicated by double staining for IL-17 and MPO and by nuclear morphology (Fig 4). In addition, we identified a few IL-17 + eosinophils, as indicated by double staining for IL-17 and EPX.
Discussion
This is the first study comparing cellular IL-17 expression in well characterized atopic and nonatopic asthma patients. We demonstrate that IL-17 was particularly expressed by neutrophils in the airway biopsies, contrasting with the paradigm that lymphocytes (Th17) are the main source of IL-17. Our results show that in patients who do not use ICS, nonatopic asthmatics have more IL-17 expressing cells in the airway wall than atopic asthmatics. In contrast, ICS use was associated with lower numbers of IL-17 expressing cells in both atopic and nonatopic asthmatics.
A new finding of our study is that IL-17 expressing cells in bronchial biopsies of asthma patients were predominantly granulocytes and not lymphocytes. We confirmed this by double immunostaining for IL-17 and MPO and by demonstrating a strong positive correlation between IL-17 expressing cells and neutrophils. Although perhaps surprising, neutrophils have been reported as a source of IL-17 in humans [24,25] as well as in animal studies [25][26][27][28][29]. In vitro investigations also showed production of IL-17 by neutrophils stimulated with immune complexes [27]. Eosinophils may be another source of IL-17, as suggested by double immunostaining of IL-17 and EPX, and by the significant correlation between IL-17 + cells and eosinophil numbers in atopic asthma patients. Previous findings in the literature are in line with our finding that IL-17 expressing cells in the airways may be granulocytes. Eustace et al showed that IL-17 in bronchial biopsies of COPD patients was expressed by neutrophils, next to mast cells, T cells, and B cells in the subepithelium of the small airways [30]. Molet et al demonstrated in asthma that eosinophils in sputum, bronchoalveolar lavage fluid, and peripheral blood express IL-17 [10]. Finally, Tan et al demonstrated in children with cystic fibrosis that neutrophils and γδT cells in the airways produce IL-17, next to Th17 cells [31]. Together, these data support the reports showing that the early sources of IL-17 are innate immune cells, which have a central role in the initiation of IL-17-dependent immune responses, even before the first CD4 + T cell sees its cognate antigen and initiates the Th17 development program [32].
We found more IL-17 expressing cells in the airway wall of nonatopic than atopic asthmatics, that is, in those who did not use ICS. It has been suggested that IL-17 may contribute to the pathogenesis of neutrophil-dominant/nonatopic rather than eosinophil-dominant/atopic asthma [4]. The presence of fewer eosinophils and more neutrophils in our nonatopic asthmatic subjects and the significant contribution of both cell types to IL-17 + cells in our biopsies support this hypothesis. Interestingly, we found ICS use to be associated with lower IL-17 expression in bronchial biopsies of both atopic and nonatopic asthmatics. This is in line with a bronchial biopsy study in 10 patients with moderate-to-severe asthma (all atopic) demonstrating a significant reduction in the number of IL-17 + cells in the airways after a 2-week course of oral corticosteroid treatment [8]. Accordingly, IL-17 levels in sputum of 15 mild-moderate and 15 severe asthmatics decreased after one month of ICS treatment [15]. In vitro data are also in line with these findings, as corticosteroids could inhibit IL-17 induction of cytokines in epithelial cells and fibroblasts [10]. We have also shown before that corticosteroids inhibit IL-17A-induced IL-8 production in epithelial cells [33].
Regarding the effect of IL-17 levels on lung function, we found a negative correlation between FEV 1 % predicted and IL-17 levels in the atopic individuals who did not use ICS. In line with our finding, Irvin et al found a negative correlation between FEV 1 % and IL-17 levels in their asthmatic population [20]. Reduced airway patency due to IL-17 mediated airway inflammation may be responsible for this negative association, but direct sensitization of airway smooth muscle may also play a role, as has been suggested in mice with house-dust mite-induced allergy [34]. Such a direct role of IL-17 in smooth muscle cell contraction is in accordance with findings of a clinical trial demonstrating clinically meaningful effects of anti-IL-17A, especially in a group with high reversibility of FEV 1 in response to albuterol [35]. However, our study seems to contradict these results, as we found an inverse relationship between IL-17 expression and reversibility of FEV 1 to albuterol. A direct comparison between the two studies is unfortunately not possible, as Busse et al did not measure expression of IL-17 in their studied population [35]. Clearly, more research is necessary to understand the "high IL-17 phenotype" of asthma and its consequences for personalized medicine.
In our study, IL-17 levels were significantly correlated with neutrophilic inflammation, but smoking did not contribute to the expression of IL-17. This supports the previous finding by Doe et al, where IL-17A and IL-17F expression in the submucosa of the lung tissue was not associated with smoking status in asthmatics [9]. However, our finding contrasts with a study in healthy smokers and COPD patients, showing that smokers have more IL-17 expressing cells in the submucosa than nonsmokers [36]. We conclude that atopy and ICS use may associate with a lower expression of IL-17 and that there are contradictory findings regarding the contribution of smoking. One of the limitations of our study is that the scarce biopsy material did not allow further investigation of a potential explanation for the effect of ICS on IL-17 + cells. A very recent study shows that IL-17A/IL-4 dual-producing cells are important in asthma and may provide a potential explanation for ICS use decreasing IL-17A + cells [20]. Future studies on human biopsy staining are warranted.
Conclusion
In summary, we here show that the IL-17 + cells present in airway wall biopsies of asthmatics are mostly neutrophils and, to a smaller extent, eosinophils, and not, as the general paradigm assumes, lymphocytes (Th17). This is of interest since nonatopic asthmatics who do not use inhaled corticosteroids have higher IL-17 expression in bronchial biopsies than atopic asthmatics, suggesting a potential role of IL-17 in the pathogenesis of nonatopic asthma. ICS use was associated with lower numbers of IL-17 + cells in both atopic and nonatopic asthmatics, suggesting a beneficial effect of ICS in general.
"Medicine",
"Biology"
] |
An Improved Retrieval Method for Multi-Transaction Mode Consortium Blockchain
The traditional method of blockchain retrieval is to search the "Block File" in sequence from the "tail" to the "head" of the blockchain, which always takes a lot of time. How to reduce the retrieval time has been a hot issue in blockchain research. This paper proposes a fast retrieval method for the Multi-Transaction Mode Consortium Blockchain (MTMCB). Firstly, we create a "User Set" and a "Block Name Set" cached in Redis. Then, according to the transaction participants and the "Block Name Set", we can obtain the relevant "Block Name List" and quickly retrieve the corresponding block files. On this basis, in order to meet the needs of rapid retrieval in large-scale systems, an improved retrieval algorithm based on a B+-tree data structure is proposed: the block file information is put into different ordered sets according to the transaction participants, and a B+-tree index is established to quickly obtain the information of the relevant block files by participant. Experimental results show that the improved Redis cache retrieval method in this paper can greatly increase the efficiency of blockchain retrieval and can settle some crucial problems in blockchain application and popularization.
Introduction
The consortium blockchain is one of the three forms of blockchain, which is characterized by a weak centralized network. It has been widely used in many fields, such as asset, credit, time proof of key events, existence proof, and trading market [1][2][3]. The Multi-Transaction Mode Consortium Blockchain [4] was developed on the basis of retaining some original characteristics of the blockchain, realizing transaction type diversification, virtual nodes, and block cloud storage, allowing attached transaction data, etc., which makes the application of blockchain technology more flexible.
In the blockchain that supports "Bitcoin" transactions, the retrieval algorithm uses sequential searches from the "tail" to the "head" of the blockchain. When the number of blocks reaches a certain size, it takes a long time to retrieve an early block (close to the chain head). Block retrieval is the basis of blockchain-related services, and the existing sequential retrieval methods are inefficient, which seriously affects the performance of its application business system.
In recent years, researchers have proposed some techniques to improve the performance of blockchain retrieval. Google [5] has added Bitcoin, Ethereum blockchain data, and Ethereum Classic (ETC) network plug-ins to BigQuery, and has employed artificial intelligence to make the blockchain searchable. Wang [6] proposed establishing a simple database index directory for medical records in medical blockchain applications, including the hash value information of the relevant blocks of the department and patient medical records. Ren [7] proposed DCOMB (dual combination Bloom filter), a method combining the data stream of the IoT (Internet of Things) with the timestamp of the blockchain to improve the versatility of the IoT database system. Shibata [8] proposed a retrieval scheme where a client provides a computer program called a searcher that implements a randomized search algorithm such as a genetic algorithm. Lv [9] proposed a retrieval model based on the combination of on-chain and off-chain structures: the method builds an inverted index for the log data and expands the blockchain header node to store the inverted index in it. Subsequent log retrieval achieves rapid positioning by sequential retrieval of the inverted index. Zhou [10] proposed a ledger data query platform called Ledgerdata Refiner. With ledger data analysis middleware, the platform provides sufficient interfaces for users to retrieve blocks or transactions efficiently. Do [11] proposed a private keyword search component designed for searching in encrypted datasets. However, these studies generally either complicate the hierarchical structure of the blockchain itself by introducing third-party support, or offer only limited improvements in search performance.
Many experts have put forward patent applications for inventions in the field of blockchain retrieval since 2015. The method for distributing and retrieving data on a blockchain network with peer nodes developed in [12] forms a blockchain network of peers, sharing and distributing private files, and sending messages to complete requests and private retrieval. The search method and system for business information of blockchain given in [13] improves the retrieval efficiency of blockchain business information by establishing a business-related index database through the unified use of personnel names, and does not need to rearrange the block content. The personalized privacy information retrieval method based on the blockchain discussed in [14] encrypts the data through the encryption algorithm of the buyer and seller on the data trading platform, and decrypts the ciphertext with their own public-key encryption algorithm to obtain the retrieval results, achieving content retrieval and intention privacy protection. These studies need to use relational databases or encryption algorithms, and database maintenance needs to monitor block modifications repeatedly. Although the efficiency has been improved, when the block file is large, retrieval still cannot meet the real-time requirements of the business system and incurs large system consumption.
In this paper, a fast blockchain retrieval method for the Multi-Transaction Mode Consortium Blockchain is proposed, which mainly includes the following:
1. A "Block Name File" is defined, which establishes an "index mechanism" between user files and block files, so as to improve the retrieval efficiency without affecting the security of user information.
2. To address the problem that the retrieval efficiency declines with the growth of the "Block Name File", and exploiting the excellent read-write performance of the Redis in-memory database, a fast retrieval method over a "Block Name Set" in the Redis cache is proposed.
3. In order to mitigate the problem that the retrieval efficiency of the "Block Name Set" declines under a large block size and large user scale, a new "User Block Set" is designed to replace the "Block Name Set" in retrieval, and a B+-tree index is introduced to improve the retrieval process.
Our experiments show that the algorithms proposed in this paper can significantly improve the retrieval efficiency, and the improved algorithm improves the retrieval efficiency even further and has better stability.
Multi-Transaction Mode Consortium Blockchain (MTMCB)
Multi-Transaction Mode Consortium Blockchain (MTMCB) inherits several features from existing blockchain technologies, such as relative decentralization, distributed storage, point-to-point communication, and secure encryption. On this basis, MTMCB generalizes the "electronic currency transaction" in blockchain technology into the "process and result of event processing", and redesigns the "transaction verification mechanism" and the "block distribution storage mechanism". As shown in Figure 1, MTMCB includes the Regulatory Node System (RNS) and the Transaction Node System (TNS). The RNS is deployed on the server, performing initialization, transaction processing, and audit services. The initialization operation only needs to be performed once. Transaction nodes can be PCs, mobile devices, or even automatic teller machines (ATMs), etc. After authorization, different user types write their initialization information into the "User File", whose storage structure is shown in Figure 2. In the MTMCB, the Regulatory Node is responsible for managing the "User File", which is used to record the correspondence between the user name and its address, where "Address" is 10 characters long and records the user's transaction address. In the user's subsequent transaction records and retrieval processes, only the "Address" of the transaction participant is used instead of the "User Name", which effectively protects the privacy of the transaction participant.
The block includes a block header and a block body, as shown in Figure 3. The block body includes the transaction information and its SHA256 [15] value. The first block in the blockchain is the genesis block; its block number is distinguished by a specific hash value. Each block (except the genesis block) stores the hash value of the previous block. In this way, the chain is formed between the blocks, and the information of the chain-tail block is stored and updated by the chain-tail file (a system file used to store the file name of the chain-tail block of the blockchain and its hash value). Using the hash value of the previous block, we can retrieve from "tail" to "head".
Redis Cache Technology
For the retrieval of block files stored on disk, a corresponding index needs to be established in memory to reduce the number of disk reads and writes and speed up retrieval. Currently, Redis cache technology is usually used to achieve fast indexing of block files. Redis [16,17] is a high-performance memory-based key-value database, which mainly solves the problem of the timeliness of data processing under high concurrency in relational databases. Because it operates purely in memory, Redis has excellent read and write performance; throughputs of more than 100,000 read and write operations per second have been recorded [18]. Redis provides rich data types, including linked lists (List), strings (String), sets (Set), and ordered sets (Zset). All Redis operations are atomic, and Redis also supports the atomic execution of several operations. It also supports the calculation of unions, intersections, and complements of sets in the service front-end, and supports a variety of sorting options. At the same time, it provides APIs for Java, Python, Ruby, PHP, Erlang, and other clients, which makes it suitable for various implementation scenarios. Additionally, Redis provides publish/subscribe, notification, key expiration, and other features.
One of the most important application scenarios of Redis is as a business cache, which is used to keep hot data that are not frequently changed but frequently accessed in memory [19], effectively reducing the number of database reads, reducing database pressure, improving response time, and enhancing throughput. Redis also provides the compressed list as a data structure to store a series of data and its encoding information in a continuous memory area. The purpose is to reduce unnecessary memory overhead as much as possible under controllable time and reading complexity conditions.

B+-Tree Index

The B+-tree [20] is a highly balanced tree, which has the advantages of high self-balance, low update cost, and high query efficiency. An m-order B+-tree is either an empty tree or satisfies the following characteristics [21]:
• Each node in the tree has, at most, m subtrees.
• If the root node is not a leaf node, it has at least two subtrees.
• If a non-terminal node is not the root, it has at least ⌈m/2⌉ subtrees.
• A non-leaf node with k subtrees contains exactly k−1 keywords.
• The keywords of all nodes in the subtree indicated by the pointer Pi−1 are less than Ki.
• The keywords of all nodes in the subtree indicated by Pn are greater than Kn.
• All leaf nodes are on the same layer of the tree structure and contain the actual information.
When querying the B+-tree [22] with the goal of finding all records whose key is K, one first finds the minimum key Ki greater than K in the root node and then follows the pointer Pi−1 on the left of Ki to the node in layer 2. If K is greater than all of the keywords in the root node, one follows the pointer Pn to the node in layer 2. In layer 2, the same method is applied to find the pointer to the node in layer 3, and so on, until, finally, a record pointing to the data file is found in a leaf node.
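The layer-by-layer descent just described can be written compactly; the node layout below (keys plus child pointers in internal nodes, keys plus records in leaves) is a simplified stand-in for a real on-disk B+-tree.

```python
class Node:
    def __init__(self, keys, children=None, records=None):
        self.keys = keys          # ordered keywords K1 <= ... <= Kn
        self.children = children  # internal node: n + 1 child pointers
        self.records = records    # leaf node: records aligned with keys

    @property
    def is_leaf(self):
        return self.children is None

def bptree_search(node, key):
    """Follow, at each layer, the pointer left of the minimum keyword
    greater than the target key (or the rightmost pointer Pn), until a
    leaf holding the actual records is reached."""
    while not node.is_leaf:
        i = 0
        while i < len(node.keys) and key >= node.keys[i]:
            i += 1
        node = node.children[i]
    for k, rec in zip(node.keys, node.records):
        if k == key:
            return rec
    return None

# A tiny hand-built two-level tree for illustration.
leaf1 = Node(keys=[5, 10], records=["r5", "r10"])
leaf2 = Node(keys=[20, 30], records=["r20", "r30"])
root = Node(keys=[20], children=[leaf1, leaf2])
print(bptree_search(root, 30))  # -> "r30"
```

Only the final step touches a leaf, so a lookup costs one node per layer; with a large fan-out, this keeps the number of disk reads small.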
As a kind of balanced multi-way tree, the B+-tree stores data only in its leaf nodes, while the inner nodes are used purely for indexing. It has significant advantages when searching external data (that is, disk data). Because of the long seek time and rotational delay of traditional disks during reads and writes, it is necessary to reduce the number of disk IO operations as much as possible when searching. The best case is to find the target index quickly and then read the data from the disk. B+-trees achieve this and have become the main search optimization technology of mainstream databases.
Application of Redis in Block Retrieval Algorithm
From the structure of the blockchain, each block stores the hash value (block file name) of its previous block. The blockchain is connected by block hash values, but this connection is unidirectional: the previous block can only be found from the latter block, not the other way around. Therefore, the traditional retrieval method obtains the hash value of the end-of-chain block from the end-of-chain file, then finds the previous block through this hash value, and so on. Its sequential search time complexity is O(N). As the blockchain system is put into use, new blocks will continuously be generated and added to the chain, and the number of blocks will become larger and larger. The efficiency of this sequential retrieval method declines rapidly; in serious cases, it may cause long-running operations, affecting the performance of the system.
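The traditional tail-to-head scan can be sketched as follows. The file layout (one JSON file per block, named by its hash, with a `prev_hash` field and a `participants` list) is a simplified assumption used only to illustrate the O(N) traversal.

```python
import json
import os

def sequential_search(block_dir, tail_file, user_address):
    """Start from the tail block recorded in the chain-tail file and follow
    each block's previous-block hash toward the chain head: O(N) in the
    number of blocks, even if the answer lies near the tail."""
    with open(tail_file) as f:
        block_name = f.read().strip()        # hash/name of the tail block
    hits = []
    while block_name:
        path = os.path.join(block_dir, block_name)
        if not os.path.exists(path):         # reached the chain head
            break
        with open(path) as f:
            block = json.load(f)
        if user_address in block.get("participants", []):
            hits.append(block_name)
        block_name = block.get("prev_hash")  # step toward the chain head
    return hits
```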
Based on the characteristics of the weakly centralized network in the MTMCB, we introduce a "Block Name Set" by using the MTMCB Regulatory Node System (RNS). An index between user files and blocks is established and held in a Redis cache, so as to achieve the purpose of fast block name search and fast block content location.
Definition and Construction of "Block Name Set"
The "Block Name File" is a specific format of the consortium blockchain system file, which is managed by the regulatory node. When the blockchain system is storing blocks, the block name of the block and the corresponding addresses of all transaction participants of the block are simultaneously stored in the "Block Name File". The logical structure of "Block Name File" storage is shown in Figure 4.
Figure 4. The logical structure of the "Block Name File".
Among them, • "Block name" is a 32 byte hexadecimal number representing the block name of a block; • ";" is the separator; • "User Address" is a hexadecimal number of 20 bytes, representing the address of the transaction participants of this block; • "The SHA256 value of this record" is a 32 byte hexadecimal number, which is used to verify whether the data in this record have been tampered with; • "#" is the end character, indicating the end of this record.
Considering the high retrieval frequency of the "Block Name File" and "User File", the read performance of disk storage cannot meet the demand of fast retrieval. When the system is started, the two files are read and parsed, and stored in the Redis cache database in the form of "Block Name Set" and "User Set".
Redis is a NoSQL database technology based on the in-memory key-value structure [23]. By building a Redis database cluster, data are cached in memory and master-slave replication is realized, so that frequently accessed data can be read directly from memory, effectively reducing the response time of data queries. The traditional relational database maps the logical model to a series of tables; Redis needs to map the logical relationship to one or multiple key-value pairs. Therefore, the choice of the key structure is important: a reasonable design can effectively improve the query efficiency and save memory overhead.
In this retrieval process, the "Block Name Set" and "User Set" information is cached, with the logical structure shown in Figure 5. A traditional blockchain system only stores block information after consensus verification; the blockchain storage extension additionally writes the block name and the addresses of all transaction participants into the "Block Name File" and into the cache while the block itself is being stored. The specific steps are as follows (a sketch in code follows Step 4): Step 1: When the block is stored, a block file recording the contents of the current block is generated. For convenience, the file is named after the hash value of the block.
Step 2: Obtain the block file name at the time of block storage, extract the "User Address" of each participant in the current block, and append a record to the "Block Name File" in the storage structure described above.
Step 3: Record the SHA256 value of the new "Block Name File" record to meet the anti-tampering requirement.
Step 4: Update the "Block Name Set" in the Redis cache.
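The sketch below strings Steps 1-4 together, reusing make_record from the earlier sketch. The file paths, the key name "block_name_set", and the use of a Redis list as the cached set are all assumptions made for illustration; the connection details of redis-py are likewise placeholders.

```python
import hashlib
import redis  # pip install redis

r = redis.Redis(decode_responses=True)  # connection details assumed

def store_block(block_bytes: bytes, participants: list[str]) -> str:
    """Steps 1-4 of the storage extension: write the block file named
    by its hash, append a record to the 'Block Name File', and push
    the same record onto the cached 'Block Name Set'."""
    block_name = hashlib.sha256(block_bytes).hexdigest()      # Step 1
    with open(f"blocks/{block_name}.dat", "wb") as fh:
        fh.write(block_bytes)
    record = make_record(block_name, participants)            # Steps 2-3
    with open("block_name_file.txt", "a", encoding="ascii") as fh:
        fh.write(record + "\n")
    r.rpush("block_name_set", record)                         # Step 4
    return block_name
```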
Block Retrieval Algorithm through Redis Cache "Block Name Set"
When retrieving data, first use the "User Set" in the Redis cache to determine the address of the user (participant), and then search the "Block Name Set" with this address to obtain the names of the blocks to be retrieved. The specific steps are as follows: Step 1: Obtain the names of one or more participants to be retrieved and search the "User Set" to map each user name to a user address. The user name must uniquely identify the user, to prevent users with the same name from affecting the search results; the system uses an ID number as the user name.
Step 2: In the "Block Name Set", match the line by line by "User address" to find the block file name containing this user.
Step 3: Find each block in the block folder, parse its data, and return them as the query result.
The specific process is shown in Figure 6. Through the above process, the "Block Name Set" is searched in the Redis cache to obtain the list of file names of all blocks in which the queried user participates; finally, the block files in that list are processed one by one on disk, as in the sketch below.
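A minimal Python sketch of this RBR-style lookup follows, continuing the conventions of the earlier sketches (a redis-py client r; a hypothetical Redis hash "user_set" mapping ID numbers to addresses; the "block_name_set" list). The linear scan makes the O(N) cost explicit.

```python
def retrieve_blocks(user_id: str) -> list[str]:
    """RBR retrieval (Steps 1-3): resolve the user address via the
    cached 'User Set', then scan the cached 'Block Name Set' record
    by record; the cost is O(N) in the number of blocks."""
    address = r.hget("user_set", user_id)                 # Step 1
    hits = []
    for record in r.lrange("block_name_set", 0, -1):      # Step 2
        fields = record.rstrip("#").split(";")
        if address in fields[1:-1]:   # fields[0] = name, [-1] = digest
            hits.append(fields[0])
    return hits  # Step 3: read and parse these block files from disk
```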
The differences between the Redis-based block retrieval method and the traditional retrieval method are shown in Figure 7.
Application of B+-Tree and Redis Cache in the Improved Block Retrieval Algorithm
Building the "Block Name Set" in the Redis cache eliminates the inefficiency of the traditional blockchain search, which walks the chain block by block from its end to its beginning. However, the longer the system is in operation, the larger the number of users becomes, which leads to a sharp increase in the numbers of transactions and blocks. According to Algorithm 1, each retrieval must traverse the entire "Block Name Set", so the computational cost grows quickly with the number of users participating in transactions. To solve this problem, this paper uses a B+-tree to reorganize the user and block-name storage structure in the Redis cache and proposes an improved retrieval algorithm.
Improved Redis Cache Storage Model
The starting point of the retrieval method in the original MTMCB is the "user name", and the search targets are the block files related to the transaction party's "user name". To realize this, the "User Address" (the stored substitute for the "User Name") is used as the Key, and the collection of block file names is used as the Value, whose data type is a Set. The storage structure model is shown in Figure 8.
Figure 8. The data storage structure of the "User Block Set" in the Redis cache (Key: UserAddress; Value: BlockName;BlockName;……).
To distinguish it from the "Block Name Set" of the original retrieval algorithm, this data file is named the "User Block Set". It stores the block file names of each user's transactions and replaces the "Block Name Set" in the Redis cache of the original algorithm. In this improved method, only the "User Set" and the "User Block Set" are stored in the Redis cache.
Although reads from the Redis cache are relatively fast, the average time complexity of appending to a compressed list is O(N), so the time consumed grows with N. Because of the large number of users participating in transactions in the MTMCB, the numbers of users and files in the Redis cache are large, and the response time for retrieving multiple "User Addresses" still cannot meet the needs of the business system. To improve retrieval efficiency, we use a B+-tree index over the "User Address". The logical structure of this metadata index is shown in Figure 9.
Initialization of Redis Cache Storage
1. When the system starts, read and parse the "Block Name File", and initialize the "User Block Set" Redis database from the "User Address" and "Block Name" fields of each record (see the sketch after this list). 2. Using Algorithm 2 in Listing 2, generate the B+-tree index of the "User Block Set" and establish the correspondence between the index and the "User Block Set". 3. Since the "User Set" also holds many transaction users and sequential retrieval over it is slow, another B+-tree index is established for the "User Files" in a similar fashion, with the user name as the keyword.
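A minimal Python sketch of item 1 follows, again with hypothetical file and key names ("block_name_file.txt", "ubs:<address>") and the redis-py client r from the earlier sketches. Sorted sets are used so that block names stay ordered, with the insertion index standing in for a timestamp score.

```python
def init_user_block_set() -> None:
    """Startup initialization: parse the 'Block Name File' and build
    the 'User Block Set' (one sorted set of block names per user
    address)."""
    with open("block_name_file.txt", encoding="ascii") as fh:
        for ts, line in enumerate(fh):
            fields = line.strip().rstrip("#").split(";")
            block_name, addresses = fields[0], fields[1:-1]
            for addr in addresses:
                # score = insertion order, a stand-in for a timestamp
                r.zadd(f"ubs:{addr}", {block_name: ts})
```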
"User Block Set" Construction Algorithm
According to the storage expansion processing in the original MTMCB system, after the "Block Name File" is expanded, the block name information is stored in the "User Block Set" according to the transaction participant information.
The specific steps are as follows: Step 1: When the block is stored, a block file recording the contents of the current block is generated; for convenience, it is named after the hash value of the block.
Step 2: A block may involve one or more transaction participants in the event processing of the system. Select one participant and, using the user name, find the corresponding "User Address" through the B+-tree index over the "User Set".
Step 3: Look up the participant's address as a Key in the "User Block Set". If the Key exists, extend its Value by appending the new block name, keeping the entries in time-stamp order. If it does not exist, add a new key-value entry whose Key is the participant's address and whose Value is the new block file name, and update the B+-tree index.
Step 4: Repeat Steps 2 and 3 for the next transaction participant until all participants have been processed, completing this extension operation.
The above process is implemented by Algorithm 3, AppendUserfile(), as shown in Listing 3; its else branch adds a new "User Block Set" entry whose "Key" is UserAddress[i] and whose "Value" is the new BlockName, with the surrounding for loop running over all participants. A hedged sketch follows.
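The following Python sketch mirrors the AppendUserfile logic under the same assumed key names as above; it is an illustration, not the authors' Java implementation. ZADD conveniently covers both branches of Listing 3, since it creates a missing key and extends an existing one, and the timestamp score realizes the time-stamp ordering required in Step 3.

```python
import time

def append_user_block(block_name: str, participants: list[str]) -> None:
    """Algorithm-3-style extension: for each participant, resolve the
    address and append the block name to that user's sorted set."""
    for user_id in participants:
        addr = r.hget("user_set", user_id)
        # creates the key if absent, extends it otherwise
        r.zadd(f"ubs:{addr}", {block_name: time.time()})
```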
Taking two user addresses as examples, "35f4ea7dc9ae3b9da2a390d91e5f3d67f89a4312" (an existing user address) and "3377487ea991e4cda1fdf5239da2a390995fb7c4" (a new one), the extension process is shown in Figure 10: if the user address exists, its "User Block Set" entry is extended; otherwise, a new entry is added to the set.
Application of B+-Tree and Redis in the Improved Block Retrieval Algorithm
After the introduction of the "User Block Set", the block retrieval algorithm no longer relies on the "Block Name Set"; instead it uses the association between the "User Set" and the "User Block Set". Suppose the system needs to query all block files in which two users participate together. Once the two corresponding addresses are found through the user file, the intersection of the two sets of block file names in the "User Block Set" immediately yields all the block files in which the users in question participate together. The retrieval process is shown in Figure 11.
The specific steps are as follows: Step 1: The retrieval method receives the names of one or more participants and maps each "user name" to a "User Address" by searching the "User Set".
Step 2: Find the Key where the "User Address" is located through the B+-tree index in the "User Block Set" and obtain all the block file name records of the user.
Step 3: Repeat step 2 until all relevant block file name records of the transaction participants are found.
Step 4: From the block-name record sets obtained in Steps 2 and 3, take their intersection to obtain the block files related to all participants, that is, the target files of this query.
The B+-tree provides fast location of participants and users, and the block file names are obtained from the association between the "User Set" and the "User Block Set" in the Redis cache. Finally, the "block file" is read from disk for verification and processing, locating the target block.
The above process is implemented by Algorithm 4, RetrieveBlockFiles(), as shown in Listing 4. Taking participants "3401031976042323304" and "341203199702103187" as examples, the search process is shown in Figure 12: the intersection of their block-name sets, "blk04.dat", is the target block storing the transactions of both. Figure 12. Example block retrieval diagram for the "User Block Set".
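A hedged Python sketch of such a RetrieveBlockFiles-style lookup follows, with the same assumed key names as in the earlier sketches; it is not the authors' Listing 4.

```python
def retrieve_common_blocks(user_ids: list[str]) -> set[str]:
    """BRBR retrieval: resolve each participant's address (Step 1),
    fetch the per-user block-name sets (Steps 2-3), and intersect
    them (Step 4) to obtain the blocks shared by all participants."""
    addrs = [r.hget("user_set", uid) for uid in user_ids]
    sets = [set(r.zrange(f"ubs:{a}", 0, -1)) for a in addrs]
    return set.intersection(*sets)

print(retrieve_common_blocks(["3401031976042323304",
                              "341203199702103187"]))
```

Redis could also perform the intersection server-side with ZINTERSTORE, avoiding the round trips; the client-side version above keeps the four steps visible.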
Time Complexity Analysis of Retrieval Algorithm
In this paper, the retrieval time consists of two parts: the B+-tree search time and the "User Block Set" intersection time.
To characterize the search time of the B+-tree, suppose that the search probability of any item in the tree is p, the total number of users in the consortium blockchain is M, the total number of block files is N, the order of the B+-tree is d, the number of levels is t, and the average space utilization is f with 0.5 ≤ f ≤ 1 (f = used storage space / maximum available storage space); each user participates in m blocks on average. The relationships shown in Table 1 then follow. Table 1. B+-tree levels, number of block files, and space requirements.
(Table 1 columns: number of levels; number of nodes; average number of users per subtree; number of block files; space requirement of the tree.)
In the B+-tree of this paper, all users reside in the leaf nodes, so with an effective fanout of fd the number of users M satisfies
$$M \le (fd)^{t},$$
and therefore the number of levels t of the B+-tree can be expressed as
$$t = \lceil \log_{fd} M \rceil .$$
Similarly, since each user participates in m blocks on average and every block appears in at least one user's list, N \le mM, which yields the relationship between t and the total number of block files N:
$$t \ge \lceil \log_{fd} (N/m) \rceil .$$
For the intersection time of the user block files in the Redis cache, since the block names are already sorted (an ordered set), the ordered-set algorithm [24] gives a time complexity of O(m). Therefore, the time complexity of the improved retrieval algorithm in this paper is O(t + m) = O(log_{fd} M + m), where m is the average number of blocks each user participates in and m is far less than N. In terms of space complexity, the traditional retrieval method is O(N); the improved retrieval algorithm mainly adds the index structure of the B+-tree, whose internal nodes require O(N/fd) space, so the total space complexity is O(N + N/fd).
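As a concrete illustration of these relations (the utilization f = 0.75 and the user count are assumed for the example; only the order d = 8 matches the experiments reported below):

```latex
% Assumed example values: order d = 8, space utilization f = 0.75,
% hence effective fanout fd = 6; M = 10^6 users.
\[
  t \;=\; \lceil \log_{fd} M \rceil
    \;=\; \lceil \log_{6} 10^{6} \rceil
    \;=\; \lceil 7.71 \rceil
    \;=\; 8 ,
\]
% so a lookup touches 8 nodes instead of scanning up to 10^6 records.
```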
The traditional block retrieval directly reads data from the disk in the order of the blockchain, and each disk read is assumed to take time H. Based on the analysis above, Table 2 summarizes the complexity of the three retrieval methods (traditional block retrieval BR, Redis-based block retrieval RBR, and the improved B+-tree-plus-Redis retrieval BRBR). In terms of space complexity, since the number of users M is less than the number of block files N, all three retrieval methods require O(N) space. In terms of time complexity, BR, RBR, and BRBR cost O(H·N), O(M + N), and O(log_{fd} M + m), respectively. When M and N are large, the BRBR algorithm proposed in this paper is clearly more advantageous.
Privacy and Security Concerns of Non-Regulatory Nodes
In terms of tamper resistance, the MTMCB adopts two methods to ensure security: 1. Blockchain cloud-storage transformation algorithm: the complete blockchain is stored locally and in the corresponding cloud of the regulatory node, and the blockchain stored in the cloud is a transformation of the local blocks; 2. Distributed storage: the blocks generated by the same exchange are distributed and stored in the cloud directories of all participants relevant to its transactions.
Among them, the privacy and security measures for non-regulatory nodes (transaction nodes) include: 1. All transactions recorded in a block use the "UserAddress" instead of the "UserName", which isolates the privacy of transaction participants well; 2. A transaction node stores only the blocks related to that participant, not the complete blockchain, and the system adopts a "virtual node" mechanism: all blocks of a transaction node are stored in the cloud virtual node corresponding to that node, which indirectly realizes block security with the help of cloud security; 3. The structure of the blocks is transparent to the regulatory node, which runs a periodic patrol mechanism to verify the same block stored at several transaction-related nodes (in fact, their cloud virtual nodes). Once any forgery or tampering is found, the block-rebuilding mechanism is started to restore block synchronization.
In the next step, we plan to further develop the following techniques to improve the security of the transaction nodes, the whole system, and the blocks: (1) decentralized identity based on zero-knowledge proofs; (2) privacy-enhancing secure multiparty computation; (3) Layer-2 solutions based on sidechains.
Experimental Environment and Data Preparation
In our experiment, we use a PC with Intel Core i5 Quad Core CPU, 500 GB HDD, and 8 GB memory. The operating system is Windows 8.1, the Redis version is 3.2.8, and the Java JDK version is 1.8.0_131. We use Jedis, the client implementation of the Java version of Redis, to implement operations such as retrieval queries on Redis.
Since the MTMCB system has not yet been put into practical use, this experiment ignores steps such as consensus verification and successively generates 10, 100, 1000, and 5000 pieces of data onto the chain, performing the extended storage processing as each block is chained.
The file information related to the method in this paper is shown in Figure 13. In this experiment, on the above simulated data and file contents, we compare the original sequential retrieval, the fast retrieval supported by the block name file, and the fast retrieval supported by the Redis cache. Figure 13. The schematic diagram of the "Block Name File" extended by the RNS.
Block Retrieval Performance Test and Analysis
To test the efficiency of the two proposed retrieval methods, RBR (application of Redis in block retrieval) and BRBR (application of B+-tree and Redis in improved block retrieval), we choose the traditional sequential block retrieval method (BR) as the reference. The amount of data is equal in all scenarios, so the retrieval efficiency of the algorithms can be compared directly. In the experiments, the B+-tree is of order 8, and the transaction users are randomly selected from the user file.
As can be seen in Table 3, the time consumption of the three block retrieval algorithms depends on the number of users and the number of block files. We compare the retrieval time of various block retrieval files under different user scales and block file sizes.
For two transaction users and 500-50,000 block files, the retrieval performance is evaluated. The experimental step of the block number is initially 500; in view of the adaptability of the algorithms at larger block scales, the step is enlarged to 1000 (block count > 4000) and 5000 (block count > 10,000), as shown in Table 3. Considering that the user scale also affects retrieval efficiency, we then fix block counts of 1500, 3000, 5000, and 10,000 and vary the number of retrieved users over 2, 3, 4, 5, 6, 8, and 10 (randomly selected combinations from the user files).
Since the RBR and BRBR algorithms are already very fast, to better compare the growth trend of the retrieval time we convert the small absolute increases into relative proportions: we compute and compare the ratio B_i / t_2, where i is the number of users participating in the transaction, t_2 is the retrieval time when the number of transaction users is 2, and B_i is the retrieval time when the number of transaction users is i. The results of this processing and comparison are shown in Table 4 and Figure 15. As can be seen from Figure 15, the traditional BR algorithm is the least affected by the number of transaction users, and the RBR algorithm is the most affected. Across the different block counts, when the number of users is less than 3, the three retrieval algorithms are not affected by the number of blocks; however, when the number of users exceeds 3, the RBR method is affected significantly, which shows that the computational cost of the RBR algorithm grows with the number of users.
Stability Analysis of Block Retrieval
The stability of retrieval refers to a consistent retrieval efficiency for the same type of retrieval requirement. We investigate the retrieval stability of the three retrieval algorithms, setting the number of transaction users to 2 and the block file count to 5000. We randomly generate 20 groups of transaction-user combinations and, for each, find the first block satisfying the conditions under each of the three retrieval algorithms, as shown in Table 5 and Figure 16.
In the improved method, the block-name information is composed into ordered sets according to the transaction participants, and a B+-tree index is then established over the participants. When searching, the B+-tree index allows us to quickly find the ordered set containing a participant's block names; finally, the information on the relevant block files is obtained quickly and efficiently by intersecting the related ordered sets. We analyze the time requirements of the retrieval algorithms in a comprehensive experiment; the efficiency and stability of the improved algorithm are verified experimentally.
This solution is currently applied only to the MTMCB and may not be applicable to other blockchain systems. In the future, we will study how to apply the ideas of this article to private blockchain systems. | 7,903 | 2020-02-08T00:00:00.000 | [
"Computer Science"
] |
Evaluating a stochastic parametrization for a fast–slow system using the Wasserstein distance
Constructing accurate, flexible, and efficient parametrizations is one of the great challenges in the numerical modeling of geophysical fluids. We consider here the simple yet paradigmatic case of a Lorenz 84 model forced by a Lorenz 63 model and derive a parametrization using a recently developed statistical mechanical methodology based on the Ruelle response theory. We derive an expression for the deterministic and the stochastic component of the parametrization and we show that the approach allows for dealing seamlessly with the case of the Lorenz 63 being a fast as well as a slow forcing compared to the characteristic timescales of the Lorenz 84 model. We test our results using both standard metrics based on the moments of the variables of interest as well as Wasserstein distance between the projected measure of the original system on the Lorenz 84 model variables and the measure of the parametrized one. By testing our methods on reduced-phase spaces obtained by projection, we find support for the idea that comparisons based on the Wasserstein distance might be of relevance in many applications despite the curse of dimensionality.
Introduction
The climate is a forced and dissipative system featuring variability on a large range of spatial and temporal scales, as a result of many complex and coupled dynamical processes inside it (Peixoto and Oort, 1992; Lucarini et al., 2014a; Ghil, 2015). Numerical models are able to explicitly resolve only a relatively short range of such scales. In particular, it is crucial to derive efficient and accurate ways to surrogate the effect of dynamical processes occurring on the small spatial and temporal scales that are not explicitly resolved by the model (e.g., because of excessive computational or storage costs). The operation of constructing so-called parametrizations is key to the development of geophysical fluid dynamical models and stimulates the investigation of the fundamental laws defining the multiscale properties of the coupled atmosphere-ocean dynamics (Uboldi and Trevisan, 2015; Vannitsem and Lucarini, 2016). Traditionally, the development of parametrizations boiled down to deriving deterministic empirical laws able to describe the effect of the small-scale dynamical processes. More recently, it has become apparent that it is important to include stochastic terms in the parametrization, which provide a theoretically more coherent representation of such effects and lead, on a practical level, to improved skill (Palmer and Williams, 2008; Franzke et al., 2015; Berner et al., 2017). A first way to derive, or at least justify the need for, stochastic parametrizations comes from homogenization theory (Pavliotis and Stuart, 2008), which leads to an approximate representation of the impact of the fast scales on the slow variables as the sum of two terms: a mean field term and a white noise term. Such an approach suffers from the rather nonphysical hypothesis that an infinite timescale separation exists between the fast and the slow scales. As the climate is a multiscale system, such a methodology is problematic to adopt. Yet, this point of view has been crucial in the development of methods aimed at deriving reduced order models
for systems of geophysical interest (see, e.g., Majda et al., 1999, 2001, 2003; Franzke et al., 2005). Mori et al. (1974) and Zwanzig (1960, 1961) analyzed, in the context of statistical mechanics, the related problem of projecting out the effect of a group of variables, with the goal of constructing effective evolution equations for a subset of variables of interest. They reformulated the dynamics of such variables by expressing them as a sum of three terms: a deterministic term, a stochastic forcing, and a memory term. The memory term defines a non-Markovian contribution whereby the past states of the variables of interest enter the evolution equation. In the limit of infinite timescale separation, the last term tends to zero, whilst the random forcing approaches the form of (in general, multiplicative) white noise.
The triad of terms, deterministic, stochastic, and non-Markovian, was also found by Wouters and Lucarini (2012), who proposed a method (referred to in what follows as the WL parametrization) for constructing parametrizations based on the Ruelle response theory (Ruelle, 1998, 2009). They interpreted the coupling between the variables of interest and those one wants to parametrize as a weak perturbation of the otherwise unperturbed dynamics of the two groups of variables. A useful feature of this approach is that it can be applied to a wide variety of systems that do not feature a clear-cut separation of scales. The parametrizations obtained along these lines match the result of the perturbative expansion of the projection operator introduced by Mori and Zwanzig for describing the effective dynamics of the variables of interest (Wouters and Lucarini, 2013, 2016). Another quality of the WL parametrization is that it is not tailored to optimize the representation of some specific statistical property, but rather approximates coherently well all observables of the system of interest. This method has already been successfully tested in simple to intermediate-complexity multiscale models by Wouters et al. (2016), Demaeyer and Vannitsem (2017), and Vissio and Lucarini (2018).
Conceptually similar results have been found through bottom-up, data-driven approaches by Kravtsov et al. (2005), Chekroun et al. (2015a, b), and Kondrashov et al. (2015). Specifically, Kravtsov et al. (2005) constructed effective models from climatic time series through a nonlinear extension of the multilevel linear regressive method, while Kondrashov et al. (2015) showed how non-Markovian data-driven parametrizations emerge naturally when we consider partial observations of a large-dimensional system.
Even when a parametrization is efficient enough to represent unresolved phenomena with the desired precision, problems arise when it comes to scale adaptivity. Re-tuning the parametrization to a new set of model parameters usually means running long simulations again, adding further computational costs. For this reason the development of scale-adaptive parametrizations is considered a central task in the geosciences (Arakawa et al., 2011; Park, 2014; Sakradzija et al., 2016). In a previous paper, the authors demonstrated the scale adaptivity of the WL approach by testing it on a mildly modified version of the Lorenz 96 model (Lorenz, 1996). A further degree of flexibility of this approach has been explored in another recent publication (Lucarini and Wouters, 2017), where the authors provided explicit formulas for modifying the parametrization when the parameters controlling the dynamics of the full system are altered.
In this paper, we wish to apply the WL parametrization to a simple dynamical system introduced by Bódai et al. (2011) and constructed by coupling the Lorenz 84 (Lorenz, 1984) model with the Lorenz 63 (Lorenz, 1963) model. In what follows, we want to parametrize the dynamical effect of the variables of the Lorenz 63 system on the variables of the Lorenz 84 system. We analyze two different scenarios, where the Lorenz 63 model acts first as a fast and then as a slow forcing, taking advantage of the fact that the WL parametrization is adaptive and able to treat both seamlessly. Compared to what was studied in Vissio and Lucarini (2018), the models investigated here have simpler dynamics, as they are not spatially extended and their coupling is simpler, being only one-way. Nonetheless, we propose a significant methodological advance with respect to our previous work in evaluating the performance of the parametrization. We extend what was studied in Vissio and Lucarini (2018) by focusing on a systematic comparison of the properties of the projected measure of the original coupled system on the subspace spanned by the variables of the Lorenz 84 model with the actual measure of the parametrized model. In particular, we study the Wasserstein distance (Villani, 2009) between coarse-grained estimates of the two 3-dimensional invariant measures. Additionally, we look at the Wasserstein distance of the measures obtained by projecting onto two of the three variables of interest, which allows for a comprehensive evaluation of how different the one-time statistical properties of the two systems are. The Wasserstein distance has been proposed by Ghil (2015) as a tool for studying climate variability and the response to forcings, and has been applied by Robin et al. (2017) in a simplified setting.
In Sect. 2 we thoroughly describe the individual models and the full coupled model, while in Sect. 3 we briefly review the Wouters and Lucarini parametrization and its application to the Lorenz 84-Lorenz 63 coupled model. Section 4 is dedicated to discussing the Wasserstein distance and in particular (a) whether it is efficient in summarizing the quality of the parametrization, (b) how sensitive our analysis is to the coarse graining of the phase space, and (c) whether useful conclusions can be drawn by looking at the problem in a projected space of two variables only. Section 5 provides the main results of our analysis. In the last section we draw our conclusions and propose future investigations.
Lorenz 84
The Lorenz 84 model (Lorenz, 1984) provides an extremely simplified representation of the large-scale atmospheric circulation:
$$\dot{X} = -Y^2 - Z^2 - aX + aF_0, \quad (1)$$
$$\dot{Y} = XY - bXZ - Y + G, \quad (2)$$
$$\dot{Z} = bXY + XZ - Z, \quad (3)$$
where the variable X describes the intensity of the westerlies, while the variables Y and Z correspond to the two phases of the planetary waves responsible for the meridional heat transport. Thus, Eq. (1) describes the evolution of the westerlies, subject to the external forcing F_0 and damped both by the linear term -aX and by the nonlinear interaction with the eddies, -Y^2 and -Z^2. This interaction amplifies the eddies through the terms XY and XZ in Eqs. (2)-(3). Furthermore, the eddies are affected by the westerlies through the terms -bXZ and bXY. The constant b regulates the relative timescale between displacements and amplifications. In Eqs. (2)-(3) we find, as in Eq. (1), a linear dissipation, whilst the symmetry between the two equations is broken by the external forcing G.
Lorenz 63
The Lorenz 63 model is probably the most iconic chaotic dynamical system (Saltzman, 1962; Lorenz, 1963; Ott, 1993). It was developed through a severe truncation of the partial differential equations describing the Rayleigh-Bénard problem (see, e.g., Hilborn, 2000 for a complete, yet simple, derivation) and describes the evolution of three modes corresponding to large-scale motions and temperature modulations in the Rayleigh-Bénard problem. The three equations are the following:
$$\dot{x} = s(y - x), \quad (4)$$
$$\dot{y} = x(\rho - z) - y, \quad (5)$$
$$\dot{z} = xy - \beta z, \quad (6)$$
where x, y, and z are proportional, respectively, to the intensity of the convective motions, to the difference between the temperatures of upward and downward fluid flows, and to the difference of the temperature in the center of a convective cell with respect to a linear profile (since Eqs. 5-6 derive from the thermal diffusion equation). The constants s, ρ, and β depend on the kinematic viscosity, the thermal conductivity, the depth of the fluid, the gravity acceleration, and the thermal expansion coefficient; s, in particular, is also known as the Prandtl number.
Coupled model
The full model used in this paper, proposed by Bódai et al. (2011), is constructed by coupling the two low-order models introduced before as follows. The Lorenz 63 system acts as a forcing for the Lorenz 84 system, which represents the dynamics of interest. The dynamics of the two systems have a timescale separation given by the factor τ and can be written as
$$\dot{X} = -Y^2 - Z^2 - aX + aF_0 + ah\,x, \quad (7)$$
$$\dot{Y} = XY - bXZ - Y + G, \quad (8)$$
$$\dot{Z} = bXY + XZ - Z, \quad (9)$$
$$\dot{x} = \tau s(y - x), \quad (10)$$
$$\dot{y} = \tau \left[ x(\rho - z) - y \right], \quad (11)$$
$$\dot{z} = \tau (xy - \beta z). \quad (12)$$
It is important to underline that the coupling between the Lorenz 84 and the Lorenz 63 models is unidirectional: the latter model affects the former and acts as an external forcing, with no feedback acting the other way around.
In what follows, we choose fairly classical values for the parameters: a = 0.25, b = 4, s = 10, ρ = 28, and β = 8/3; the two forcings are set as F_0 = 8 (corresponding to the so-called winter conditions) and G = 1. The parameter h is a modulation coefficient that defines the coupling strength, and we choose h = 0.25 in order to provide a stochastic forcing between two and four orders of magnitude smaller (on average) than the tendencies of the X variable (see below). The parameter τ defines the ratio between the internal timescales of the two systems: in the case τ > 1, the Lorenz 63 model provides a forcing on timescales typically shorter than those of the system of interest, while if τ < 1, the forcing can be interpreted as a modulating factor of the dynamics of the Lorenz 84 model. In the first case, in particular, we can interpret the Lorenz 63 model as the source of the forcing exerted by convective motions on the synoptic-scale dynamics described by the Lorenz 84 model. The numerical integration scheme is a fourth-order Runge-Kutta scheme with a time step of 0.005 (Bódai et al., 2011).
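For concreteness, the following minimal Python sketch integrates the coupled system with the stated parameter values. It reflects our reading of Eqs. (7)-(12); in particular, the placement of the coupling term ah·x in the X tendency and the initial condition are assumptions, and the run length matches the 10-year (146,000-step) runs mentioned in Sect. 3.

```python
import numpy as np

# parameter values from the text
a, b, F0, G = 0.25, 4.0, 8.0, 1.0
s, rho, beta = 10.0, 28.0, 8.0 / 3.0
h, tau = 0.25, 5.0

def tendencies(u):
    """Coupled Lorenz 84 - Lorenz 63 tendencies, u = (X, Y, Z, x, y, z).
    The Lorenz 63 forcing enters only the X equation (one-way coupling,
    written here as a*h*x; this placement is our reconstruction)."""
    X, Y, Z, x, y, z = u
    return np.array([
        -Y**2 - Z**2 - a * X + a * F0 + a * h * x,
        X * Y - b * X * Z - Y + G,
        b * X * Y + X * Z - Z,
        tau * s * (y - x),              # tau rescales the L63 clock
        tau * (x * (rho - z) - y),
        tau * (x * y - beta * z),
    ])

def rk4_step(u, dt=0.005):
    """One standard fourth-order Runge-Kutta step."""
    k1 = tendencies(u)
    k2 = tendencies(u + 0.5 * dt * k1)
    k3 = tendencies(u + 0.5 * dt * k2)
    k4 = tendencies(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

u = np.array([1.0, 0.0, 0.0, 1.0, 1.0, 1.0])  # arbitrary initial state
for _ in range(146_000):  # 730 time units, i.e., 10 "years"
    u = rk4_step(u)
```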
Henceforth, we will refer to the standard Lorenz 84 model as the uncoupled model, whilst the Lorenz 84 model subject to the coupling with the Lorenz 63 model will be called the coupled model. Wouters and Lucarini (2012, 2013, 2016) presented a top-down method suitable for constructing parametrizations for chaotic dynamical systems of the form
$$\dot{K} = F_K(K) + \epsilon \Psi_K(K, J), \quad (13)$$
$$\dot{J} = F_J(J) + \epsilon \Psi_J(K, J), \quad (14)$$
where K = (X, Y, Z) is the vector of the variables we are interested in and J = (x, y, z) is the vector of the variables we want to parametrize. The coefficient ε controls the strength of the couplings εΨ_K(K, J) and εΨ_J(K, J). The parametrization is obtained assuming the chaotic hypothesis (Gallavotti and Cohen, 1995) and applying Ruelle response theory (Ruelle, 1998, 2009); the effect of the coupling in Eq. (13) is approximated, up to second order in ε, by three terms: the first order consists of a deterministic term, while the second order includes a stochastic forcing and a non-Markovian term. The general form of the parametrization (e.g., Vissio and Lucarini, 2018) is
$$\dot{K} = F_K(K) + \epsilon D(K) + \epsilon^2 S(K, t) + \epsilon^2 M(K, t), \quad (15)$$
where D, S, and M indicate, respectively, the deterministic, stochastic, and memory terms defined below in Eqs. (18)-(22). Note that the projection onto the variables of interest of the invariant measure of the full system given in Eqs. (13)-(14) and the invariant measure of the system given in Eq. (15) are the same up to second order in the coupling parameter ε, as discussed in Wouters and Lucarini (2013) and Vissio and Lucarini (2018). Since the couplings are seen as a perturbation applied to an otherwise uncoupled system, the three terms in Eq. (15) can be calculated from the statistical properties of the unperturbed equations
$$\dot{K} = F_K(K), \quad (16)$$
$$\dot{J} = F_J(J). \quad (17)$$
The numerical integration of Eqs. (16)-(17) may require far fewer computational resources than Eqs. (13)-(14), particularly in the case of multiscale systems.
As discussed in Vissio and Lucarini (2018), the WL parametrization has the remarkable feature of being adaptive to changes in the timescale separation between the K and J variables, obtained by rescaling, e.g., t → τt in Eq. (17). Under this rescaling, the term D(K) in Eq. (15) is unchanged, while the timescales of the autocorrelation of the noise term S(K) and of the memory term M(K) are reduced by a factor τ. In the specific case of the Lorenz 96 system studied in Vissio and Lucarini (2018), the adaptivity is more general than that related to changes in the timescale separation only, and points to the possibility of developing general adaptive parametrization schemes beyond that specific model. It is not yet clear whether this might lead to the construction of spatially scale-adaptive parametrizations.
Constructing the parametrization
The coupling strength ε, shown in Eqs. (13)-(14) and in Eq. (15), assumes the value ε = ah, while the coupling terms are, with respect to the vector (X, Y, Z) of the Lorenz 84 phase space, Ψ_K(K, J) = Ψ_K(J) = (x, 0, 0) and Ψ_J(K, J) = Ψ_J(K) = (0, 0, 0). Note that this is a case of independent coupling, i.e., a coupling that depends only on the variables of the other subsystem, for which the application of the methodology is simpler than in the dependent coupling case (Wouters and Lucarini, 2012).
The deterministic term D in Eq. (15) measures the average impact of the coupling on the K dynamics and can be written as
$$D(K) = \rho_{0,J}\big(\Psi_K(J)\big) = \big(\rho_{0,J}(x),\, 0,\, 0\big), \quad (18)$$
where ρ_{0,x}(A) (x = K, J) is the expectation value of A computed according to the invariant measure of the uncoupled dynamics of the x variables. In Eq. (18), we have used the expression of the coupling given in Eq. (7) and computed the ensemble average as a time average over the ergodic measure of x. Since the measure of the Lorenz 63 model is symmetric under x → -x, one could think of choosing D(K) = (0, 0, 0). Nevertheless, this holds only in the limit of a run of infinite length; in our case the run lasts 146,000 time steps, i.e., 730 time units or 10 years in Lorenz model time. Therefore, it seems appropriate to compute D from the time series of the uncoupled Lorenz 63 model via Eq. (18), as we do for the second order of the parametrization (see below).
Since the coupling shown in Eq. (7) depends only on one variable (in this case x) of the system we want to parametrize, the stochastic term can be written as
$$S(K, t) = S(t) = \big(\omega(t),\, 0,\, 0\big), \quad (19)$$
where the stochastic noise ω(t) is defined by its correlation R(t) and its time average:
$$R(t_1 - t_2) = \rho_{0,J}\big(x'(t_1)\, x'(t_2)\big), \quad (20)$$
$$\langle \omega(t) \rangle = 0, \quad (21)$$
with x' = x - ρ_{0,J}(x) the fluctuation of x about its mean. As discussed in Wouters and Lucarini (2012, 2013) and Vissio and Lucarini (2018), for more complex couplings the stochastic term assumes the form of a multiplicative noise.
We have used the software package ARFIT (Neumaier and Schneider, 2001; Schneider and Neumaier, 2001) to construct time series of noise with the desired properties defined by Eq. (20). The last term in Eq. (15) is the non-Markovian contribution to the parametrization and can be written as
$$M(K, t) = \int_0^{\infty} \mathrm{d}\sigma\; h\big(\sigma, K(t - \sigma)\big), \quad (22)$$
where the memory kernel h contains the coupling Ψ_J(K) as its first factor. As discussed in Sect. 2.3, the evolution of the variables of the Lorenz 63 model, see Eqs. (10)-(12), is independent of the state of the variables of the Lorenz 84 model, i.e., Ψ_J(K) = (0, 0, 0). As a result, the first factor on the right-hand side of Eq. (22) vanishes, so that the parametrization we derive is fully Markovian.
After the implementation of the Wouters and Lucarini procedure, Eq. (7) is parametrized as
$$\dot{X} = -Y^2 - Z^2 - aX + aF_0 + \epsilon\, \rho_{0,J}(x) + \epsilon^2 \omega(t), \quad (23)$$
with ε = ah. Equation (23), together with Eqs. (8)-(9), will henceforth be indicated as the system constructed with the second order parametrization, whilst the same equations without the stochastic term (therefore comprising the first order, deterministic term only), namely
$$\dot{X} = -Y^2 - Z^2 - aX + aF_0 + \epsilon\, \rho_{0,J}(x), \quad (24)$$
will be called the first order parametrization.
Wasserstein distance
We wish to assess how well a parametrization allows us to reproduce the statistical properties of the full coupled system.
In this regard, it seems relevant to quantify to what extent the projected invariant measure of the full coupled model on the variables of interest differs from the invariant measures of the surrogate models containing the parametrization. In order to evaluate how much such measures differ, we resort to their Wasserstein distance (Villani, 2009). Such a distance quantifies the minimum "effort" in morphing one measure into the other; it was originally introduced by Monge (1781), somewhat unsurprisingly, to study problems of military relevance, and later improved by Kantorovich (1942).
Starting from two distinct spatial distributions of points, described by the measures μ and ν, we can define the optimal transport cost (Villani, 2009) as the minimum cost to move the set of points corresponding to μ onto the set of points corresponding to ν:
$$C(\mu, \nu) = \inf_{\gamma \in \Gamma(\mu, \nu)} \int c(x, y)\, \mathrm{d}\gamma(x, y), \quad (25)$$
where c(x, y) is the cost of transporting one unit of mass from x to y and Γ(μ, ν) is the set of all joint probability measures whose marginals are μ and ν. The function C(μ, ν) in Eq. (25) is called the Kantorovich-Rubinstein distance. In the rest of the paper, we will consider the Wasserstein distance of order 2:
$$W_2(\mu, \nu) = \left( \inf_{\gamma \in \Gamma(\mu, \nu)} \int d(x, y)^2\, \mathrm{d}\gamma(x, y) \right)^{1/2}, \quad (26)$$
where d(x, y) is the Euclidean distance between x and y.
The Euclidean distance is given by
$$d(x, y) = \left( \sum_{i} (x_i - y_i)^2 \right)^{1/2}. \quad (27)$$
We can also define the Wasserstein distance for two discrete distributions, where x_i and y_j represent the locations of the points, with masses given, respectively, by μ_i and ν_j. The order 2 Wasserstein distance for discrete distributions is then
$$W_2(\mu, \nu) = \left( \min_{\gamma} \sum_{i,j} \gamma_{ij}\, d(x_i, y_j)^2 \right)^{1/2}, \qquad \sum_j \gamma_{ij} = \mu_i, \quad \sum_i \gamma_{ij} = \nu_j, \quad \gamma_{ij} \ge 0, \quad (28)$$
where γ_ij is the fraction of mass transported from x_i to y_j. This definition of the Wasserstein distance has already proven effective (Robin et al., 2017) in providing a quantitative measurement of the difference between the snapshot attractors of the Lorenz 84 system under summer and winter forcings.
Hereby we propose to further assess the reliability of the WL stochastic parametrization by studying the Wasserstein distance between the projected invariant measure of the original system on the first three variables (X, Y, Z) and the invariant measures obtained using the surrogate dynamics corresponding to the first and second order parametrizations. Nevertheless, since the numerical computation of optimal transport through linear programming is not cheap, a suitable approach is required. To this end, we perform a standard Ulam discretization (Ulam, 1964; Tantet et al., 2018) of the measure supported on the attractor, by coarse graining the phase space onto a set of cubes with constant sides (the impact of changing the side length is discussed below). The coordinates of the cubes then provide the locations x_i, while the corresponding point densities define the masses; finally, we exclude from the calculation all grid boxes containing no points at all. Our calculations are performed using a modified version of the Matlab software written by Gabriel Peyré and made available at http://www.numerical-tours.com/matlab/optimaltransp_1_linprog/ (last access: 1 March 2018), conveniently modified to include the subdivision of the phase space into cubes and the assignment of the corresponding densities to those cubes.
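As a minimal illustration of the same discrete optimal-transport problem (the authors used the modified Matlab routine cited above; this Python sketch is ours, assuming NumPy and SciPy are available, and the function name wasserstein2 is hypothetical), Eq. (28) can be solved directly by linear programming:

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein2(xs, mu, ys, nu):
    """Order-2 Wasserstein distance between two discrete measures:
    xs, ys are (n, d) and (m, d) arrays of Ulam-box centers; mu, nu
    are the normalized point densities in those boxes."""
    n, m = len(mu), len(nu)
    # squared Euclidean cost between every pair of box centers
    C = ((xs[:, None, :] - ys[None, :, :]) ** 2).sum(-1).ravel()
    # marginal constraints: rows of gamma sum to mu, columns to nu
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        A_eq[n + j, j::m] = 1.0
    b_eq = np.concatenate([mu, nu])
    res = linprog(C, A_eq=A_eq, b_eq=b_eq, bounds=(0, None),
                  method="highs")
    return np.sqrt(res.fun)  # res.fun is the minimal cost, i.e. W2^2

# sanity check: each box moves straight up by 1, so W2 = 1
xs = np.array([[0.0, 0.0], [1.0, 0.0]])
ys = np.array([[0.0, 1.0], [1.0, 1.0]])
mu = nu = np.array([0.5, 0.5])
print(wasserstein2(xs, mu, ys, nu))  # ~1.0
```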
Parametrizing the coupling with the Lorenz 63 model
In this section we show the results corresponding to the case τ = 5. Therefore, the Lorenz 84 and Lorenz 63 models are seen as the slow and the fast dynamical systems, respectively.
Qualitative analysis
We first provide a qualitative overview of the performance of the parametrization by investigating a few Poincaré sections, which provide a convenient and widely used way to visualize the dynamics of a system in a two-dimensional plot (Eckmann and Ruelle, 1985; Ott, 1993); typically, the plane chosen for the section of the Lorenz 84 model is Z = 0. Figure 1a shows the Poincaré section at Z = 0 of the variables X and Y of the coupled model given in Eqs. (7)-(12). Panel (b) of the same figure shows the Poincaré section of the Lorenz 84 model obtained by removing the coupling with the Lorenz 63 model. Finally, panels (c) and (d) show the Poincaré sections of the modified Lorenz 84 models obtained by adding the first and second order parametrization, respectively. Visual inspection suggests that the second order parametrization does a good job in reproducing the properties of the full coupled model. Metaphorically, our parametrization aims at describing as accurately as possible the impact of "convection" on the "westerlies". It is insightful to look at how it affects the properties of the two variables, Y and Z, that are not directly impacted by it. This amounts to looking at the impact of the parametrization of "convection" on the "large-scale planetary waves" and, consequently, on the "large-scale heat transport". Therefore, we look into the X = constant Poincaré section, in order to highlight the properties of Y and Z. The four panels in Fig. 2 are structured as in Fig. 1 and depict the Poincaré section computed for X = 1. Also in this case, the second order parametrization provides a far better match to the coupled model, featuring a remarkable ability to reproduce the main features of the pattern of points. In order to provide further qualitative evidence of our results, in Fig. 3a-d we show the trajectories in the phase space of the X, Y, and Z variables for the four considered models. For the sake of clarity, the plots are created using just 5 years (365 time units). In the case of the coupled model, the attractor spans more extreme values of the variables, and the second order parametrization successfully imitates this feature, while the simple deterministic correction, once again, is completely inadequate.
Evaluation of the performance of the parametrization
Further to the qualitative inspection, we provide here quantitative comparisons to support our study. All the remaining simulations in this section are run for 100 years (7300 time units) with a time step of 0.005; thus, each attractor is constructed with 1,460,000 points. We have tested that the results presented below are virtually unchanged when considering a smaller time step of 0.001. We first look into the probability density functions (PDFs) of the variables X, Y, and Z, which describe, loosely speaking, our climate. Figure 4 shows the PDF of the X variable for the four considered models. As expected, the second order parametrization allows for reconstructing, with great accuracy, the statistics of the coupled model. The bimodality of the uncoupled Lorenz 84 model is reproduced by the model featuring the first order parametrization, while the second order model accurately predicts the unimodal distribution shown by the coupled model. The PDFs for the variables Y and Z are shown in Figs. 5 and 6, respectively. Also here, where the external forcing does not destroy the bimodality of the distributions found in the uncoupled case, the WL parametrization leads to a very good approximation of the properties of the coupled model. In particular, the tails of the distributions are represented with a high level of precision, making it possible to reproduce with good accuracy the extreme values of the variables. This is a matter worth investigating in a separate study. Note that, since the WL parametrization is constructed to have skill for all observables, it is not so surprising that it can also perform well far away from the bulk of the statistics (see the discussion in Lucarini et al., 2014b).
Table 1. Expectation values of the ensemble averages of the first two moments of the variables X, Y, and Z (columns: Observables; Uncoupled model; 1st order parametrization; 2nd order parametrization; Coupled model). The uncertainty is indicated as the standard deviation σ over the ensemble of realizations. All values are multiplied by 10^2. For the case τ = 5, the Lorenz 63 model acts as the fast-scale model.
Aside from the analysis of the PDFs, further statistical insight is provided by the first moments of the variables and their uncertainty, computed as the standard deviation over an ensemble of runs. We have performed just 10 runs, but our results are very robust. The results for the statistics of the first two moments are reported in Table 1: all the inspected quantities clearly show that the second order parametrization reproduces the moment statistics of the coupled model very accurately.
If the considered PDFs depart strongly from unimodality, the analysis of the first moments can be of little use, and it becomes hard to obtain a thorough understanding of the statistics from this point of view. As discussed above, we therefore supplement this simple analysis with a more robust evaluation of the performance of the parametrizations based on the Wasserstein distance. A first issue in computing the Wasserstein distance consists of carefully choosing the number of cubes used for the Ulam projection. Figure 7a shows the Wasserstein distance between the invariant measure of the coupled model projected onto the XYZ space and the invariant measures of the uncoupled Lorenz 84 model and of the models obtained using the first and second order parametrizations. We find that, for all choices of the coarse graining, the measure of the model with the second order parametrization is by far the closest to the projected measure of the coupled model. Instead, the deterministic parametrization provides a negligible improvement with respect to the trivial case of the uncoupled model, as expected given the discussion following Eq. (18). This gives a quantitative evaluation of the improved performance resulting from adding a stochastic term to the parametrization. The second piece of information is that the estimated Wasserstein distance depends only weakly on the degree of coarse graining and seems to approach its asymptotic value for the finest (yet still pretty coarse) Ulam partitions considered here. This is encouraging, as the findings obtained at low resolution already appear very meaningful and useful.
A well-known problem of Ulam's method is that it can hardly be adapted to high-dimensional spaces; this is the so-called curse of dimensionality. Additionally, evaluating the Wasserstein distance in high dimension itself becomes extremely computationally challenging. In order to partially address these problems, we repeat the analysis shown in Fig. 7a for the measures projected onto the XY, XZ, and YZ planes, thus constructing the so-called sliced Wasserstein distances. Results are reported in Fig. 7b-d, respectively. We find that, unsurprisingly, the distance between the projected measures is strictly lower than the distance in the full phase space, ceteris paribus. What is more interesting is that all the observations made for Fig. 7a apply to the other panels as well. Therefore, it seems reasonable to argue that studying the Wasserstein distance in projected spaces might also provide useful information on the full system. In order to extend the scope of our study, we have repeated the analysis described above for the case τ = 1/6. Such a choice implies that the model responsible for the forcing has an internal timescale longer than that of the model of interest. We remark that the WL parametrization, as discussed in Vissio and Lucarini (2018), is not based on any assumption of timescale separation between the variables of interest and the variables we want to parametrize. We report below only the main results, for the sake of conciseness. Figure 8a-d show the Poincaré sections at Z = 0 for all the considered models. In the case of the coupled system, most of the fine structure found in the uncoupled model is lost, and we basically have a cloud of points with weaker features than what is shown in Fig. 1 for τ = 5. Nonetheless, also in this case the model with the second order parametrization reproduces (visually) quite well what is shown in Fig. 8a and, in particular, shows matching regions where the density of the points is higher. The analysis based on the Wasserstein distance between the measures is shown in Fig. 9. Without going into details, the same considerations made for τ = 5 remain valid for τ = 1/6 regarding the performance of the parametrization schemes and the role of coarse graining. Additionally, we observe that, for each choice of coarse graining, the distance between the measures of the parametrized models and the actual projected measure of the coupled model is larger for τ = 1/6, thus indicating that the parametrization procedure performs worse in this case. This fits with the intuition gained by comparing how well Fig. 8b-d reproduce panel (a) of Fig. 8 versus what is found in the case of Fig. 1.
Conclusions
Developing parametrizations able to surrogate efficiently and accurately the dynamics of unresolved degrees of freedom is a central task in many areas of science, and especially in the geosciences. There is no obvious protocol for testing parametrizations of complex systems, because one is bound to look only at specific observables of interest. This procedure is not error-free, because optimizing a parametrization against one or more observables might have unfortunate effects on other aspects of the system and worsen its performance in other respects.
In this paper we have addressed the problem of constructing a parametrization for a simple yet meaningful two-scale system, and of testing its performance in a possibly comprehensive way. We have considered a simple six-dimensional system constructed by coupling a Lorenz 84 system and a Lorenz 63 system, with the latter acting as a forcing on the former, and the former being the subsystem of interest. We have included a parameter controlling the timescale separation of the two systems and a parameter controlling the intensity of the coupling. We have built a first order and a second order parametrization able to surrogate the effects of the coupling using the scale-adaptive WL method. The second order scheme includes a stochastic term, which has proved essential for radically improving the quality of the parametrization with respect to the purely deterministic case (first order parametrization), as already shown visually by looking at suitable Poincaré sections.
We show here that, in agreement with what was discussed in previous papers, the WL approach provides an accurate and flexible framework for constructing parametrizations. Nonetheless, the main novelty of this paper lies in our use of the Wasserstein distance as a comprehensive tool for measuring how different the invariant measures ("the climates") of the uncoupled Lorenz 84 model, and of its two versions with deterministic and stochastic parametrizations, are from the projection of the measure of the coupled model onto the variables of the Lorenz 84 model. We discover that the Wasserstein distance provides a robust tool for assessing the quality of the parametrization and, quite encouragingly, that meaningful results can be obtained when considering a very coarse-grained representation of the phase space. A well-known issue with a methodology like the Wasserstein distance is the so-called curse of dimensionality: the procedure becomes unfeasible when the system has more than a few degrees of freedom. We have (partially) addressed this issue by looking at the Wasserstein distance of the measures projected onto the three two-dimensional spaces spanned by pairs of the three variables of the Lorenz 84 model. We find that the properties of the Wasserstein distance in the reduced spaces closely follow those found in the full space. We maintain that diagnostics based on the Wasserstein distance in suitably defined reduced-phase spaces should become standard in the analysis of the performance of parametrizations and in intercomparing models of any level of complexity.
Code and data availability. The data used for plotting the figures contained in the paper were generated using codes available from Gabriele Vissio upon request.
Competing interests. The authors declare that they have no conflict of interest. Special issue statement. This article is part of the special issue "Numerical modeling, predictability and data assimilation in weather, ocean and climate: A special issue honoring the legacy of Anna Trevisan (1946-2016)". It is not associated with a conference.
Figure 3. A 3-D view of the attractor of (a) the coupled model, (b) the uncoupled model, (c) the 1st order parametrization, and (d) the 2nd order parametrization. For the case τ = 5, the Lorenz 63 model acts as a fast forcing on the Lorenz 84 model.
Figure 4. Probability density of the X variable. For the case τ = 5, the Lorenz 63 model acts as a fast forcing on the Lorenz 84 model.
Figure 5. Probability density of the Y variable. For the case τ = 5, the Lorenz 63 model acts as a fast forcing on the Lorenz 84 model.
Figure 6. Probability density of the Z variable. For the case τ = 5, the Lorenz 63 model acts as a fast forcing on the Lorenz 84 model.
Figure 7. Wasserstein distances from the coupled model with respect to the number of cubes per side: (a) 3-D case, (b) projection onto the XY plane, (c) projection onto the XZ plane, and (d) projection onto the YZ plane. For the case τ = 5, the Lorenz 63 model acts as a fast forcing on the Lorenz 84 model.
Figure 9. Wasserstein distances from the coupled model with respect to the number of cubes per side: (a) 3-D case, (b) projection onto the XY plane, (c) projection onto the XZ plane, and (d) projection onto the YZ plane. For the case τ = 1/6, the Lorenz 63 model acts as a slow forcing on the Lorenz 84 model. | 8,373.4 | 2018-06-19T00:00:00.000 | [
"Physics"
] |
Robust H∞ Feedback Compensator Design for Linear Parabolic DPSs with Pointwise/Piecewise Control and Pointwise/Piecewise Measurement
School of Automation and Guangdong-Hong Kong-Macao Joint Laboratory for Smart Discrete Manufacturing, Guangdong University of Technology, Guangzhou, Guangdong 510006, China School of Automation and Guangdong Discrete Manufacturing Knowledge Automation Engineering Technology Research Center, Guangdong University of Technology, Guangzhou, Guangdong 510006, China School of Cybersecurity, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, Xi'an, Shaanxi 710072, China
Introduction
Distributed parameter systems (DPSs) are infinite-dimensional in nature and are generally modeled by partial differential equations (PDEs). DPSs are widely used in engineering systems [1-4], such as thermodynamics, chemical engineering, missiles, aerospace, aviation, and nuclear fission and fusion. The control problem of DPSs has attracted extensive attention due to its important applications in engineering systems, such as the vibration control of flexible structures, whose vibration processes can be described by Euler-Bernoulli equations; the diffusion control of oil spills, whose diffusion phenomena can be described by diffusion equations; and the temperature control of heating furnaces, whose thermal conduction processes can be described by heat equations. In recent decades, many scholars all over the world have achieved fruitful results in the design of DPS control [5-17].
Generally, control forms of DPSs can be divided into boundary control and in-domain control according to the actuators' locations. Fruitful results on boundary control of DPSs, where the actuators are located at the boundary, have already been published. For example, the boundary control problem of flexible robot manipulators has been developed to handle DPSs with flexible structures [18][19][20].
This technique has been extended to boundary antidisturbance control and boundary adaptive robust control for flexible DPSs [21][22][23]. Boundary control of 1D nonlinear parabolic DPSs has been studied in [24], in which the continuum backstepping method is utilized. Boundary feedback control of DPSs has been addressed in [25], and a novel combination of the feedback idea and the backstepping approach is presented in [26]. Sampled-data boundary control and sliding mode boundary control of DPSs have been studied in [27,28], where a sampled-data strategy for the boundary control problem is proposed. Fuzzy boundary control based on the T-S fuzzy DPS model is shown in [29,30]. H∞ boundary control has been proposed in [31], where a linear matrix inequality (LMI) approach is utilized. Meanwhile, there are also some achievements on in-domain control of DPSs. For example, pointwise control of DPSs with a T-S fuzzy DPS model has been developed in [32], where a fuzzy state feedback controller is designed. Furthermore, this technique has been extended in [33,34]. Robust sampled-data control has been proposed in [35,36], where the sampled-data pointwise controller method is applied. Mobile piecewise control of 1D DPSs has been studied in [37], where a mobile actuator-plus-sensor network is developed, and this technique has been extended to 2D DPSs in [38]. More recently, collocated and noncollocated in-domain control of DPSs have been studied in depth. Static collocated feedback control has been presented in [39,40], where static collocated pointwise and piecewise feedback controllers are designed for parabolic DPSs. For noncollocated control, where the actuators and sensors can never be placed at exactly the same locations, static feedback control has been studied in [32,36,41,42], and observer-based dynamic feedback control has been designed in [43][44][45][46][47]. The estimation problems in controller design for DPSs have been studied in [48][49][50][51][52][53], and for DPSs with unknown parameters, parameter estimation methods have been applied in [54][55][56][57][58]. The design and analysis methods have also been extended to switched control systems and filtering techniques in [59][60][61][62]. Although there have been many promising efforts, many control problems of DPSs still need to be studied in the future.
In general, disturbance problems of DPSs are unavoidable because of errors from model calculations and equipment.
Thus, a robust H∞ control approach is proposed to deal with the control problem of DPSs with external disturbances. Robust H∞ control has attracted much attention from many scholars over the past few decades. For example, an H∞ static output feedback boundary controller for semilinear parabolic and hyperbolic DPSs is proposed in [31].
This idea has been extended to solve the sampled-data distributed H∞ control problem for a class of parabolic DPSs in [35]. An H∞ fuzzy observer-based controller is proposed for a class of nonlinear parabolic DPSs in [63], and this technique has been developed into the observer-based H∞ sampled-data fuzzy control design in [46,64] and the mixed H2/H∞ fuzzy observer-based feedback control design in [65]. In this paper, we extend the works in [66,67] to design an H∞ output feedback compensator for linear parabolic DPSs with external disturbances by using a unified Lyapunov approach. A sufficient condition under which the static H∞ feedback compensator can stabilize the DPSs under an H∞ performance constraint in the collocated observation case is first proposed in terms of standard linear matrix inequalities (LMIs); then, another sufficient condition under which the observer-based dynamic H∞ feedback compensator can stabilize the DPSs under an H∞ performance constraint in the noncollocated observation case is developed by using the Lyapunov direct method, variants of the Poincaré-Wirtinger inequality, the Cauchy-Schwartz inequality, integration by parts, and the first mean value theorem for definite integrals.
The main contributions and novelty of this paper compared with existing works are summarized as follows: (i) Different from the results in [32-34, 43, 44]. (iii) An H∞ performance constraint in the sense of | · |_2 is proposed to deal with the external disturbance of the model and the measurement disturbance in the measurement output.
The organizational structure of the remaining parts of this paper is arranged as follows: first, the problem formulation and some preliminary knowledge are presented in Section 2. Then, the static output feedback compensator design and the observer-based dynamic output feedback compensator design, for collocated and noncollocated observation in space and satisfying the H∞ performance constraint, are shown in Section 3. Section 4 provides numerical simulation results of the corresponding closed-loop systems to show the effectiveness of the proposed design method. Finally, brief conclusions are drawn in Section 5.
Problem Formulation and Preliminaries
In this paper, we consider a class of one-dimensional linear parabolic DPSs with external disturbances, subject to Robin boundary conditions in one dimension and an initial condition, where z ∈ [0, L] ⊂ R denotes the spatial position. y(t) ≜ [y_1(t) y_2(t) · · · y_n(t)]^T ∈ R^n is the set of measurement outputs from n sensors, and s(z) ≜ [s_1(z) s_2(z) · · · s_n(z)]^T ∈ R^n is a known integrable vector function of z, whose element s_j(z) describes the distribution of the j-th sensor on the spatial domain [0, L].
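A minimal sketch of such a plant, written out under the assumption that the lost display follows the standard reaction-diffusion form of the cited literature (consistent with the Robin boundary conditions and the instability threshold quoted below; the concrete coefficients are assumptions, not the paper's exact display):

\[
u_t(z,t) = u_{zz}(z,t) + \eta\, u(z,t) + h^{T}(z)\,U(t) + d(z,t), \qquad z \in [0,L],\; t>0,
\]

with Robin boundary conditions at z = 0 and z = L and initial condition u(z,0) = u_0(z), where U(t) ∈ R^m collects the m control inputs applied through the actuator shaping functions h_i(z), and d(z,t) is the external disturbance.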
It should be pointed out that when η > 0.25π²L⁻², the one-dimensional linear parabolic DPS is unstable.
Remark 1.
It is worth noting that equation (1) is equivalent to a more general form [68], through a conversion of the state variables and control variables in which the conversion factor is a known scalar function, continuously differentiable in time t.
In practical applications of DPSs, the number of actuators and sensors is usually limited, and each is active at a specified point, or over a specified part, of the spatial domain. Therefore, the in-domain control forms of DPSs are generally divided into pointwise control and local piecewise control according to the distribution of the actuators. In this paper, both forms of in-domain control are considered; the actuators' spatial distribution functions h_i(z) are described accordingly, where δ(·) is the Dirac delta function [69]. The chosen functions g_i(z), i ∈ M, produce pointwise control at the points z_i and local piecewise uniform control over [z_i^-, z_i^+], respectively. Meanwhile, the spatial domain [0, L] can be divided into m parts by a spatial domain decomposition approach such that 0 = z_1 < z_2 < · · · < z_{m+1} = L, and the locations of the actuators for pointwise control and local piecewise control satisfy corresponding relationships. Similar to the actuators' distribution, the in-domain observation forms are generally divided into pointwise measurement and local piecewise measurement; the sensors' spatial distribution functions s_j(z), j ∈ N, are described analogously. The points z_j, j ∈ N, and the local subdomains implied by the chosen functions c_j(z), j ∈ N, produce pointwise observation at the points z_j and local piecewise uniform observation over [z_j^-, z_j^+], respectively. At the same time, the spatial domain [0, L] can be divided into n parts by a spatial domain decomposition approach such that 0 = z_1 < z_2 < · · · < z_{n+1} = L, and the locations of the sensors for pointwise measurement and local piecewise measurement satisfy corresponding relationships. For the linear parabolic DPS (1)-(3), the following H∞ performance constraint is proposed under the zero initial condition u_0(·) = 0, where c_1 > 0 and c_2 > 0 are the prescribed H∞ attenuation levels.
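A sketch of the two distribution forms and of the performance constraint, assuming the lost displays follow the usual pointwise/piecewise convention (γ₁, γ₂ denote the attenuation levels rendered above as c_1, c_2; the exact weighting in the constraint is an assumption):

\[
h_i(z) = \delta(z - z_i) \ \ \text{(pointwise)}, \qquad
h_i(z) = \frac{1}{z_i^{+} - z_i^{-}}\,\mathbf{1}_{[z_i^{-},\,z_i^{+}]}(z) \ \ \text{(local piecewise uniform)},
\]
\[
\int_0^{\infty} \|u(\cdot,t)\|_2^2\, dt \;\le\; \gamma_1^2 \int_0^{\infty} \|d(\cdot,t)\|_2^2\, dt \;+\; \gamma_2^2 \int_0^{\infty} |\omega(t)|^2\, dt,
\]

the second line being a plausible | · |_2-type H∞ constraint that penalizes both the in-domain disturbance d and the measurement disturbance ω.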
For the development of the stability analysis in this paper, two exponential stability definitions in the sense of | · |_2 for the linear parabolic DPS (1)-(3) are given.
The linear parabolic DPS (1)-(3) with the designed output feedback compensator is said to be exponentially stable in the sense of | · |_2 under an H∞ performance constraint if the corresponding closed-loop DPS with d(·, t) = 0 and ω(t) = 0 is exponentially stable in the sense of | · |_2 and, meanwhile, the H∞ performance constraint in (10) is ensured when the initial value of u(z, t) is zero (u_0(z) = 0) for all d(·, ·) ∈ L_2(0, ∞; H) and ω(·) ∈ L_2(0, ∞; R^n). The following lemmas are very useful for the development of the robust H∞ compensator design in this paper.
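For concreteness, exponential stability in the sense of | · |_2 can be sketched as the existence of constants M ≥ 1 and σ > 0 (both symbols are assumptions, since the paper's display is not reproduced here) such that

\[
\|u(\cdot,t)\|_2 \;\le\; M e^{-\sigma t}\,\|u_0\|_2, \qquad t \ge 0 .
\]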
Robust H∞ Feedback Compensator Design
Based on the distributions of the actuators and sensors, h_i(z), i ∈ M, in (6) (or (7)) and s_j(z), j ∈ N, in (8) (or (9)), the observation obtained from the sensors can be divided into collocated observation in space (i.e., h(z) = s(z)) and noncollocated observation in space (i.e., h(z) ≠ s(z)). In other words, the collocated observation in space can be treated as a special case within the noncollocated design framework. Meanwhile, the noncollocated observation in space (i.e., h(z) ≠ s(z)) consists of the following cases: pointwise control with noncollocated pointwise observation, pointwise control with noncollocated piecewise observation, piecewise control with noncollocated pointwise observation, and piecewise control with noncollocated piecewise observation. In this section, all the noncollocated observation cases are considered in studying the robust H∞ dynamic output feedback compensator design for the DPS (1)-(3).
A new type of Luenberger observer for the DPS (1)-(3) is constructed, where û(z, t) denotes the state of the observer and 0 < L ≜ diag{l_1, l_2, . . . , l_n} ∈ R^{n×n} is the observer gain to be determined. The observation functions s(z) ≜ [s_1(z) s_2(z) · · · s_n(z)]^T are chosen such that 0 = z_1 < z_2 < · · · < z_n < z_{n+1} = L. Then, we design an observer-based dynamic output feedback compensator in which 0 < K ≜ diag{k_1, k_2, . . . , k_m} ∈ R^{m×m} is the compensator gain in the form of an m × m diagonal matrix, and the compensator functions h(z) ≜ [h_1(z) h_2(z) · · · h_m(z)]^T ∈ R^m are chosen such that 0 = z_1 < z_2 < · · · < z_m < z_{m+1} = L. The estimation error state is defined accordingly. From formulas (1)-(3) and (13)-(17), the estimation error system is obtained, where the initial value is e_0(z) ≜ u_0(z) − û_0(z).
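A minimal sketch of the observer-compensator pair, assuming the plant form sketched in Section 2 (the output-injection structure is an assumption consistent with a Luenberger-type design, not the paper's exact display):

\[
\hat u_t(z,t) = \hat u_{zz}(z,t) + \eta\,\hat u(z,t) + h^{T}(z)\,U(t) + s^{T}(z)\,L\big(y(t) - \hat y(t)\big),
\qquad U(t) = -K\,\hat y(t),
\]

where \hat y(t) is the output predicted from the observer state through the same sensor functions s_j(z); the estimation error e(z,t) ≜ u(z,t) − \hat u(z,t) then satisfies a parabolic PDE driven only by the disturbances d and ω.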
Substituting the designed dynamic feedback compensator (15) and the estimation error state (17) into the DPS (1)-(3), the closed-loop system (19) is obtained. Hence, the closed-loop coupled DPS is represented by the estimation error system (18) and the closed-loop system (19) with expressions (6) (or (7)) and (8) (or (9)). The objective of this subsection is to seek an effective method to design an observer-based dynamic output feedback compensator such that the resulting closed-loop coupled DPS is exponentially stable under an H∞ performance constraint in the sense of | · |_2 with prescribed H∞ attenuation levels c_1 and c_2.
Conclusions
In this paper, the robust H∞ feedback compensator for a class of linear parabolic DPSs with external disturbances has been investigated, considering pointwise/piecewise control and pointwise/piecewise measurement based on the distributions of the actuators and sensors. A new type of Luenberger observer is designed to overcome the difficulty caused by noncollocated observation and to track the state of the PDEs. Differently from previous observer design methods, all the cases of pointwise/piecewise control and pointwise/piecewise measurement are covered via a unified distribution function. An observer-based dynamic output feedback compensator is designed, and an H∞ performance constraint is proposed under the zero initial condition. By utilizing variants of the Poincaré-Wirtinger inequality, the Cauchy-Schwartz inequality, integration by parts, and the first mean value theorem for definite integrals, sufficient conditions for the exponential stability of the corresponding closed-loop system under the H∞ performance constraint in the sense of | · |_2 are presented in terms of LMI constraints. Finally, numerical simulation results of the resulting closed-loop systems are provided to illustrate the effectiveness of the proposed design strategy.
Data Availability
The data of this paper come from the official websites of the sample enterprises and can be obtained there.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 3,357.2 | 2021-01-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Spinal Cryptococcoma Mimicking Spinal Cord Tumor Complicated by Cryptococcal Meningitis in an Immunocompetent Patient
There are very few cases in the literature of spinal cryptococcomas occurring alone, without cerebral involvement at onset. Although cases of cerebral cryptococcomas followed by arachnoiditis have been reported, cases of spinal cryptococcoma followed by cryptococcal meningitis are exceedingly rare. We present a young immunocompetent patient with a spinal cord cryptococcoma who subsequently developed cryptococcal meningitis. Early recognition and aggressive antifungal treatment are important.
Introduction
Cryptococcus is an encapsulated yeast, and its capsule confers antiphagocytic and immunosuppressive abilities. There are over 50 species, but only C. neoformans var. neoformans (C. neoformans) and C. neoformans var. gattii (C. gattii) are considered principal pathogens in humans [1], and they cause one of the most common fungal infections of the central nervous system [1]. C. neoformans (serotypes A, D and AD) and C. gattii (serotypes B and C) are the commonest subtypes.
C. neoformans is the most common species in the United States and other temperate climates throughout the world and is found in aged pigeon droppings [1,2]. C. gattii is mainly found in the tropics but is not associated with birds; it grows in certain species of eucalyptus trees (i.e., Eucalyptus camaldulensis, Eucalyptus tereticornis). C. neoformans causes cryptococcal infections in immunocompromised patients, whereas C. gattii usually infects immunocompetent patients [1][2][3][4][5].
The cryptococcal spores are inhaled by humans and become lodged in the lungs. During periods of impaired immunity, the spores disseminate to other organs, especially the central nervous system (CNS). Meningitis is the commonest manifestation of cryptococcosis of the CNS [1][2][3][4][5][6][7]. There are very few cases in the literature of spinal cryptococcomas occurring alone, without cerebral involvement at onset. Although cases of cerebral cryptococcomas followed by arachnoiditis have been reported, cases of spinal cryptococcoma followed by cryptococcal meningitis are exceedingly rare [3][4][5][6]. We present a young immunocompetent male patient with a spinal cord cryptococcoma who subsequently developed cryptococcal meningitis.
Case Report
A 19-year-old adolescent presented with acute onset of weakness of the lower limbs following a fall at home. He was admitted to a neighbouring state hospital and referred to our centre for further management after three weeks of symptoms. There was no history of prior weakness of the lower limbs or sphincter involvement, and the patient had been in good health. General physical examination revealed an alert and oriented young man.
Neurological examination of the upper limbs was normal. Lower limb examination revealed increased tone and clonus bilaterally. Power in the lower limbs was Medical Research Council (MRC) grade 3/5 on the left and 1/5 on the right, with extensor plantar responses bilaterally. Pin-prick sensation, proprioception and vibration were reduced up to the level of L1 bilaterally. There were no signs of meningeal irritation. Blood tests including full blood count, erythrocyte sedimentation rate, tumour markers, and HIV and viral hepatitis screens were normal. Chest X-ray and X-ray of the thoraco-lumbosacral spine were unremarkable. Urgent MRI of the whole spine (sagittal T2-weighted imaging) showed a ring-like intramedullary lesion within the thoracic cord asymmetrically occupying the entire central, lateral and posterior portions of the cord. The lesion had a central hypointensity with surrounding cord oedema and ring enhancement post contrast.
The lesion was initially suspected to be a spinal cord tumour, possibly an ependymoma (Figures 1a and 1b).
The patient was operated on within 48 h of hospitalisation at our tertiary hospital. T11 laminoplasty and excision of the tumour were performed. Postoperatively, the patient's general consciousness remained the same; however, the power of the lower limbs deteriorated to MRC 2/5 on the left and 0/5 on the right. He was diagnosed with postoperative oedema and was given dexamethasone 4 mg eight-hourly. One week later, there was improvement of the lower limb power to 3/5 bilaterally.
Biopsy of the cord lesion surprisingly showed granulomas, multinucleated giant cells, and lymphocytes with central necrosis, together with encapsulated organisms positive for periodic acid-Schiff stain (which stains the cell wall and body of fungal elements) and Gomori methenamine silver (GMS) stain (which stains exclusively for fungal organisms, causing cell walls to stain black or brown). These histopathological changes, combined with the findings on histochemical stains and subsequent cultures, were consistent with chronic granulomatous inflammation secondary to C. neoformans infection, although the patient did not have any signs or symptoms of systemic fungal infection (Figures 2a-2c). Antifungal treatment with intravenous amphotericin and fluconazole was started rather than flucytosine, which has been reported to result in quicker clearance of viable yeasts from the central nervous system, as the latter drug was not readily available at our center at that time. Urgent plain CT of the brain showed early non-obstructive hydrocephalus (Figure 3). Lumbar puncture revealed a raised cerebrospinal fluid (CSF) opening pressure of >30 cm H2O with high CSF protein (1.0 g/dL) and low CSF sugars (0.6 mg/dL). CSF India ink for Cryptococcus was positive for encapsulated yeasts, and CSF cryptococcal antigen was positive. CSF cryptococcal culture grew C. neoformans. The patient was treated aggressively with intravenous mannitol, lumbar punctures, neurosurgical intervention, and intravenous amphotericin B and fluconazole. However, the patient eventually developed nosocomial sepsis with prolonged hospitalization, leading to a fatal outcome after one month despite aggressive antifungal treatment. Early recognition of spinal cord cryptococcomas, with cryptococcal meningitis as a possible delayed complication, is important.
Discussion
Acute onset presentation of an intramedullary spinal cord cryptococcoma is a rare initial presentation of cryptococcal infection. We describe an unusual case of a solitary spinal cord cryptococcoma followed by cryptococcal meningitis in a young immunocompetent patient, occurring in the absence of cerebral cryptococcomas. Prior to this, he did not have fever, systemic complaints, headaches, vomiting or seizures to suggest central nervous system infection or raised intracranial pressure at onset. This immunocompetent patient presented acutely with sudden onset of lower limb weakness as a result of the spinal cryptococcoma. Though his eventual outcome was poor, this case highlights important learning points, as the initial lesion mimicked a cord tumor in the absence of systemic complaints, with disseminated disease developing as a complication of the initial pathology. Infection with C. gattii usually affects immunocompetent patients. It presents with a granulomatous inflammatory response, mainly in tropical and subtropical regions, resulting in chronic disease as compared to C. neoformans. Subjects infected with C. gattii are at risk of intracerebral mass lesions (e.g. cryptococcomas) as well as meningitis, encephalomyelitis and optic nerve atrophy. In comparison, C. neoformans is present worldwide. It has a tendency to cause infection predominantly in immunodeficient patients (90%), though not exclusively, leading to acute meningitis or meningoencephalitis with a higher mortality. Patients with C. gattii infection are characterized by lower mortality but by neurological sequelae, the need for surgery and prolonged therapy [1][2][3][4][5]. In our case, C. neoformans infection of the spinal cord developed in an otherwise immunocompetent young man, which again is against what is commonly reported in the literature. This underscores certain differences in presentation locally in Malaysia. The subject progressed to develop cryptococcal meningitis despite antifungal therapy and surgical intervention, possibly due to dissemination of disease.
The intramedullary lesions of spinal cryptococcosis are described as cryptococcomas [1][2][3][4][5][6][7][8]. Intramedullary spinal cryptococcomas are less common compared to intradural and other types of extradural lesions [8]. In the literature there have been a few reports of spinal intramedullary cryptococcomas. In these rare reports, the majority presented with paraparesis and pyramidal signs in the lower limbs with a sensory level in the thoracolumbar region [6][7][8]. Shen et al. reported a similar rare case of a conus medullaris cryptococcoma in an immunocompetent host in the absence of meningoencephalitis, with symptoms mimicking an intramedullary spinal cord tumor. Their patient had a good outcome with surgical removal in the absence of dissemination of the lesion, followed by post-operative anti-fungal treatment [6]. As in our case, the lesion was initially thought to be a tumor. The majority developed lesions in the thoracic or upper lumbar regions and were diagnosed after a biopsy [6][7][8]. The cryptococcomas on spinal MRI appeared as hyperintense lesions with oedema and ring enhancement, similar to our patient. The survival rate for these patients was high when disease was limited to the spine [6][7][8][9][10]. The incidence of spinal cryptococcomas with cerebral meningitis is not well documented. In one of the earliest reports, by Grosse et al., a young female patient developed lumbar myelitis due to an intramedullary lumbar cryptococcoma and cerebral cryptococcomas, necessitating combined therapy with antifungals, steroids (for the myelitis) and decompressive laminectomy, which produced an overall good outcome after 2 years of antimycotic treatment [3]. Aside from this early study, Stein et al. [12] reported four patients with mycotic spinal arachnoiditis and meningitis due to cryptococcal infection. Three of these patients developed acute fungal meningitis postoperatively and two of the three died, similar to our case. None of them had cryptococcomas of the spine. Therefore mortality is higher when cerebral involvement is present [6][7][8][9][10][11][12]. In our patient the overall outcome, despite correct diagnosis, dissemination of the disease and institution of aggressive antimycotic treatment, was complicated by nosocomial septicaemia. In order to improve therapeutic outcomes, there have been a number of reports suggesting the possible benefits of intravenous amphotericin in combination with intrathecal amphotericin B. In a trial conducted by Yuchong et al., all 14 patients treated with intrathecal injection of amphotericin B or with low-dose therapy survived, with no evidence of relapse during 1 year of follow-up [13]. Therefore, this technique should be considered, as it appears to be effective and safe in carefully selected patients, though it was not tried in our patient.
Steroids were used early in this case in the presence of mass effect, to help reduce the swelling caused by the lesion residing in the spinal region, which was causing significant neurological deficits. There is a lack of strong evidence showing the risks or benefits of steroid use in severe CNS cryptococcal disease. However, there are multiple case reports, including the case described by Grosse et al. mentioned earlier in this article [3][4][5][6][7][8][9][10][11]. As in the latter case, therapeutic success and good neurologic outcomes were achieved with the use of adjunctive steroids in non-HIV infected individuals with severe CNS cryptococcal infection, especially when complicated by cryptococcomas in the cerebral cortex and spine, to reduce the inflammatory response [6][7][8][9][10][11][12][13][14]. Nonetheless, more randomised controlled trials are needed to weigh the potential benefits against the risks.
Early shunting by aggressive neurosurgical intervention has also been advocated by consensus guidelines, even when the ventricles are not markedly enlarged, in order to improve outcomes and prevent the discomfort experienced by patients due to repeated lumbar punctures. In fact, a number of studies have shown benefits in terms of morbidity and mortality outcomes. The placement of a ventriculoperitoneal shunt, even in cases where neuroimaging studies do not reveal hydrocephalus and in those with persistent CSF cryptococcal infection, can address the problem of raised intracranial pressure [14]. Furthermore, depending on the response to medical treatment, decompressive laminectomy with local expansion of the dura can also be considered to prevent irreversible spinal cord damage, as was done in this case.
In summary, we present an interesting and rare case of an immunocompetent patient with a spinal intramedullary cryptococcoma followed by acute cryptococcal meningitis postoperatively. Although rare, spinal cryptococcomas should be thought of in patients from endemic regions and when MR imaging shows hyperintense lesions with ring enhancement involving the thoracic and upper lumbar region [9]. This diagnosis needs to be entertained in young patients, as this disease is potentially treatable.
Informed Consent
Informed consent was obtained for the publication of this case report, images and biopsy results.
"Medicine",
"Biology"
] |
Differential expansion and rectangular HOMFLY for the figure eight knot
Differential expansion (DE) for a Wilson loop average in representation $R$ is built to respect degenerations of representations for small groups. At the same time it behaves nicely under some changes of the loop, e.g. of some knots in the case of $3d$ Chern-Simons theory. Especially simple is the relation between the DE for the trefoil $3_1$ and for the figure eight knot $4_1$. Since arbitrarily colored HOMFLY for the trefoil are known from the Rosso-Jones formula, it is enough to find their DE in order to make a conjecture for the figure eight. We fulfil this program for all rectangular representations $R=[r^s]$, i.e. we make a plausible conjecture for the rectangularly colored HOMFLY of the figure eight knot, which generalizes the old result for totally symmetric and antisymmetric representations.
Introduction
Chern-Simons (CS) theory [1] lies at the boundary between two very different worlds - that of Yang-Mills and that of topological theories. Because of this it serves as a bridge, allowing transfer of ideas and methods between the two fields. As a topological theory, CS is exactly solvable - in the sense that any particular quantity (correlator) can be calculated, if one applies enough skill and effort. Even if there are some indications of chaos [2], they are well under control, in the spirit of [3]. It is not immediately like this in truly dynamical Yang-Mills theory, where quantities with regular behavior at all energy scales and/or all time moments are rather rare and difficult to identify. At the same time, observables in CS theory (known as knot polynomials [4]-[8]) depend on the same parameters - group and representations - as in generic Yang-Mills theory, and this provides a possibility to study these dependencies separately from obscure space-time and energy-momentum properties. From this point of view, of special interest are the aspects of knot-polynomial calculus which rely not so much on topological invariance, but rather on group- and representation-theory properties common to all Yang-Mills theories. Such are, for example, the quasiclassical and genus expansions (known as Vassiliev and Hurwitz expansions in knot theory) and the AMM/EO topological recursion [9] in the latter case (this time "topological" refers not to topological theory, but to the structure of Feynman diagrams and/or spectral surfaces - which are also characteristics of theories with real dynamics). In fact, these two do not exhaust the interesting structures in Yang-Mills theories - among the less known ones the most intriguing is the differential expansion (DE). The word "differential" here refers to a technical(?) connection to Khovanov's differential in the presentation of [10], which has a lot to do with the topological aspects of knot theory. However, the DE itself is rather a pure representation-theory property, reflecting the fact that different representations can occasionally coincide for small groups. Despite being very simple, this fact provides unexpectedly much information about the observables (knot polynomials).
The study of DE actually began in [11], which was part of a broad renewed attack on the problem of knot polynomials and Racah matrices [12]-[56]. In [11] DE was used to conjecture a general expression for HOMFLY and superpolynomials of the very simple figure eight knot 4_1 in all symmetric and antisymmetric representations. Later these formulas were extended to many more knots [32,33,35] and also used to obtain the exclusive Racah matrices [41,28] - which, once conjectured, provide a systematic approach to calculations for all arborescent knots [50]-[56]. Despite this tremendous success, the DE method is thought to be too difficult and does not attract much attention - except for serious developments in [27,40,44,46]. It is the goal of the present paper to once again demonstrate its abilities. We do this by conjecturing an extension of [11] for 4_1 from symmetric and antisymmetric to arbitrary rectangular representations (labeled by rectangular Young diagrams R = [r^s] with r columns and s rows). This is tedious, but surprisingly straightforward. Among the next challenges the first one is the generalization from 4_1 to other twist and, further, double-braid knots of [32], because then one will be able to apply the double-evolution technique from [56] to deduce the exclusive Racah matrices S̄ and S - and then calculate rectangularly-colored HOMFLY for arbitrary arborescent knots. This, however, is beyond the scope of the present text, which concentrates on 4_1.
In sec. 2 we provide a brief review [44] of the properties of differential expansions and their enhancement for defect-zero knots, such as the trefoil, the figure eight, and the twist and double-braid families. In the remaining sections we outline step by step the technique for building the DE for the rectangular HOMFLY of the trefoil, known from the Rosso-Jones formula [57] - known are the polynomials, but their additional structure, the DE, needs to be revealed, and this is the most difficult part of the story. However, once revealed, it is very easily deformed from 3_1 to 4_1 (and, hopefully, also to other twist and double-braid knots). This deformation provides the main result of the present paper - the answer for H^{4_1}_{[r^s]}. In this case the 3-graded super- and hyperpolynomials, as well as the 4-graded version of the latter [31], are provided by changes of variables [11,32,33]. We end in sec. 7 with a short conclusion.
Generalities
The differential expansion (DE) from [11] for normalized knot polynomials of the figure-eight knot K = 4_1 in any symmetric representation R = [r] is given by eq. (1). The DE was further generalized to all twist knots in [32], to all knots and even links in [33,35], and finally in [44], where the general form (2)-(3) involves a parameter d, an important characteristic of the knot K called the defect of the DE - from (1) we see that the defect is zero for K = 4_1. As found in [44], d + 1 is actually the degree in q^{±2} of the fundamental Alexander polynomial; see [54,55,56] and references therein. The remarkable fact is that the G_k in (3) do not depend on r.
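Since the display itself is not reproduced in this copy, here is a sketch of the symmetric expansion of [11] in the standard notation {x} = x − x^{−1}, [n] = {q^n}/{q} (this reconstruction is believed to match eq. (1), but should be treated as such):

\[
H^{4_1}_{[r]} \;=\; 1 \;+\; \sum_{k=1}^{r} \frac{[r]!}{[k]!\,[r-k]!}\;\prod_{i=0}^{k-1} \{Aq^{r+i}\}\,\{Aq^{i-1}\},
\]

which indeed involves the differentials only through the quadratic combinations {Aq^{r+i}}{Aq^{i−1}} - the s = 1 case of the Z-factors introduced below.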
For attempts to preserve this property in the generalization from symmetric to other representations see [27,46]. From the factorization property of special polynomials [19,24], corrections for these representations should also be proportional to {A}. Note also that this latter property implies that whenever G_1 depends on q, i.e. the defect is greater than zero (the Alexander power is greater than one), the higher G_k cannot vanish at A = 1 - otherwise it is impossible to preserve Al_{[r]}(q) = Al_{[1]}(q^r).
For our purposes in the present paper, the following properties of the differential expansion are important: • DE represents knot polynomials as polynomials of degree |R| + 1 in the differentials D_n.
• Coefficients in these polynomials are functions of q and A, so it is not quite easy to give a formal definition of the expansion.
• DE is also a version of the Vassiliev expansion in h for q = 1 + h and A = (1 + h)^N - with this definition HOMFLY modulo a framing factor are polynomials, not series, in h - still again D_n ∼ h, but the coefficients also depend on h.
• The shape of the DE is partly dictated by the fact that the knot polynomial depends on the representation, i.e. when representations coincide, the same is true of the knot polynomials.
In this paper we mostly elaborate on the boldfaced statement in the list. This simple fact actually stands behind the "surprising" success of the differential expansion method for symmetric representations R - and it remains quite powerful for arbitrary rectangular R.
Restrictions on differential expansion from group theory
We shall use a combination of three facts about coincidences of representations for small N; more generally, such coincidences impose severe constraints on the next terms of the differential expansion. The only word of caution is that in the above relations N should not be taken smaller than r or s - trivialization of representations with the number of lines l_R > N implies nothing for normalized knot polynomials - what vanishes in these cases are the dimensions dim_R, while the normalized polynomials stay non-trivial.
Together with (8), this is in obvious accordance with (1) and, as we see, it is true for arbitrary knots K. Restrictions on the higher terms of the differential expansion come from the same degeneracies with N > r, for example N = 3. Denoting the proportionality coefficients by G^K_1(A, q) and g^K_2(A, q), we get (13), and this should now be combined with (10). Repeating the same reasoning for N = 4, 5, . . . , 2r − 1, we iteratively deduce, for an arbitrary knot K, the generic form (16) of the symmetric differential expansion suggested in [44]. The transposed version applies to antisymmetric representations. The original expansion (1) for the figure eight knot 4_1 looks far more restrictive. Actually there are two levels of peculiarity: the coefficients G_k are further factorized, with d_{4_1} = 0, and the new coefficients F^{4_1}_k(A, q) = 1. The parameter d_K was named the defect of the differential expansion in [44], and it was conjectured to equal the degree of the fundamental Alexander polynomial minus one (the polynomial should be taken in the topological framing, where it is symmetric under the change q −→ q^{−1}, and its degree is the maximal power of q^2; e.g. Al^{4_1}_{[1]} = H^{4_1}_{[1]}(A = 1, q) = 1 − {q}^2 = −q^2 + 3 − q^{−2} has degree one and defect zero). For polynomials of defect zero the first coefficient G_1 does not depend on q - such are all the twist knots, as well as a slightly more general two-parametric two-bridge family, called double-braid in [32], which needs to be studied for extracting the rectangular Racah matrices S̄.
In the case of defect-zero knots one can say that the differential expansion is actually not just in the differentials D_n = {Aq^n}, but in the quadratic differentials Z^{(i)}, and one of the conjectures of the present paper is that this property - dependence on the differentials only through the quadratic Z-factors - persists for all rectangular representations.
Group theory restrictions for rectangular diagrams
As we already know from (6), for rectangular diagrams R = [r^s] the first term of the differential expansion is especially simple. Further, from (7) with N = r + 1 and N = s + 1 we deduce relations with the same G^K_1(A, q) as in (16). A much simpler corollary of (22) is that the (r + 2)-th and all further terms of the differential expansion are divisible by {A/q^{s+1}}, and the (s + 2)-th and further terms by {Aq^{r+1}}. This simply follows from the assumption (?) that H_{[r]} and H_{[1^s]} contain respectively r + 1 and s + 1 different powers of the differentials. To this one can add similar statements for higher N - and this already provides rather powerful restrictions, which are further enhanced for defect-zero knots by the conjecture of Z_{[r^s]}-dependence.
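A sketch of what the simplest of these constraints plausibly looks like, extrapolating the symmetric case (the precise prefactor is an assumption consistent with (6) and with the Z-factor notation of the following sections):

\[
H^{K}_{[r^s]} \;=\; 1 \;+\; [r]\,[s]\; G^{K}_1(A,q)\,\{Aq^{r}\}\{Aq^{-s}\} \;+\; \ldots\,,
\]

so that the first correction for the whole tower of rectangular representations is controlled by the same function G^K_1 as in the symmetric case.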
For R = [22] we get in this way the structure (24). We see that in general the group theory restrictions leave undetermined just two differential structures and three coefficients, while in the case of defect zero the differential structures are almost fixed. Indeed, the transposition symmetry of the diagram [22] requires the sets of superscripts ? in the two undetermined terms to be symmetric. Since the last term has combinatorial multiplicity (binomial coefficient) one, the only choice is ? = 0. In the middle term the most natural choice would be ? = ±1. It remains to determine F̃_2 and the q ←→ q^{−1} symmetric F̃_3 and F̃_4. One can hope that they are made from the F_{2,3,4} describing the first four symmetric representations (actually, in the case of [22] the substitutions q −→ q^0, q^{−1} can be sufficient).
Trefoil in rectangular representations
The trefoil 3_1 is a torus knot; therefore its HOMFLY is known in arbitrary representation from the Rosso-Jones formula [57], and colored hyperpolynomials from its straightforward generalization [18,19,20]. Since rectangular representations do not suffer from the multiplicity problem, superpolynomials for them presumably coincide with hyperpolynomials. Moreover, there is a straightforward 4-graded generalization [31,33].
What is important for our purposes is that the trefoil is the only torus knot with defect zero; thus it provides invaluable information for generalizations of the simplest type (20) of differential expansion to non-(anti)symmetric representations R. In this paper we use it to find the knot-dependent coefficients in (24) and its more complicated analogues for K = 3_1. After that we conjecture how they are modified for 4_1 (this is easy). In the future one can attempt generalizations to other twist and, finally, double-braid knots - which can be a far less reliable speculation. Still, the risk would pay off - from these conjectures one will deduce exclusive Racah matrices, calculate colored HOMFLY for arbitrary arborescent knots, and make new checks involving arborescent torus knots: two-strand, 8_19 and ???.
Representation R = [22]
This is the case where all arborescent knots were already exhaustively analyzed in [56], based on a rigorous calculation of the inclusive Racah matrices in [55]. We now reproduce (some of) these results by the differential expansion method.
On the other hand, this should be an expansion of the true Rosso-Jones answer, and a simple adjustment allows one to substitute the question signs by the full-fledged formula, in which the underlined elements are those prescribed by the group theory constraints (24). Erasing all the coefficients F^{3_1} then reproduces the right answer, derived in [55].
Representation R = [rr] = [r^2]
For this we need to guess general formulas for the coefficients. This actually requires additional insights from the Rosso-Jones answers for higher r - none of them can be handled by itself, but altogether they provide sufficient information for an educated guesswork. Once the result emerges, it looks obviously true. From these expressions it is clear that contributing to H^{3_1}_{[rr]} are the Z^{(r)}-independent terms in the following pyramid, i.e. those lying over the r-th sub-diagonal.
Because of the two-step edges at the right-hand side, the number of such terms is always finite. The direct sum sign ⊕ stands for omitted factors made from quantum numbers and powers of q; they are explicit in the exact formula. Once again, the answer for the trefoil is known from the Rosso-Jones formula - the goal of the above manipulations was to convert it into the differential-expansion form, where the transition to the figure eight case is straightforward. From this formula we get (conjecturally) the answer for 4_1.
Representation R = [444] = [4^3]
Similarly, though now the factorizations are even less restrictive, they are "split", and as a result the related constraints appear more frequently, thus facilitating the adjustment of the coefficients. The outcome is the analogous expansion in the Z-factors Z^{(i)}_{[444]}. Like in the previous examples, clearly seen is the symmetry between the coefficients in the A^{2p} and A^{2(|R|−p)} terms, typical for binomial-like expansions. The powers of q^2 are just the sums of the indices i over all Z-factors Z^{(i)}_{[444]} in the products. Note also the additional powers of Z-factors, which are not directly predicted by the group-theory restrictions (45).
List of examples
It can be convenient to have a collection of the simplest answers brought together. To preserve the maximum of information we give them for the trefoil 3_1; in the case of 4_1 one just omits the powers of (−A^2) and q^2.
The structure of Z-factors
As already stated, we assume that for defect-zero knots K^{(0)}, i.e. those whose fundamental Alexander polynomial has degree one, Al^{K^{(0)}} = α + β(q^2 + q^{−2}), the rectangular colored HOMFLY depend only on the shifted Z-factors

Z^{(i)}_{[r^s]} = {Aq^{r+i}}{Aq^{i−s}}

(among other things this implies [11] a simple conjecture for the superpolynomials, because the Z-factors, unlike individual differentials, are easily made positive after the T-deformation). The first question is which of these Z-factors actually contribute; the answer is the chain of shifts constrained below, and we conjecture that nothing more actually shows up.
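As a consistency check (a worked example, not contained in the text itself): for the fundamental representation r = s = 1 the only admissible Z-factor is Z^{(0)}_{[1]} = {Aq}{A/q}, and the defect-zero expansion of the figure eight knot collapses to

\[
H^{4_1}_{[1]} \;=\; 1 + Z^{(0)}_{[1]} \;=\; 1 + \{Aq\}\{A/q\} \;=\; A^2 + A^{-2} - q^2 - q^{-2} + 1,
\]

which is the classical fundamental HOMFLY of 4_1 and reduces at A = q^2 to its Jones polynomial.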
A more extended conjecture includes the following theses:
• The chain has no gaps, it necessarily includes i = 0, and it is restricted as stated above, i.e. −(s − 1) ≤ i_L ≤ 0 and 0 ≤ i_R ≤ r − 1
• The chains form "floors", and each floor is shorter by at least two, so that there are no two-step edges in the pyramid, see (47) below
• The number of floors cannot exceed min(r, s)
From these rules a definite set of contributing structures follows, which is indeed true in numerous tested examples. In pictorial form, shown in the boxes are the shifts {i}: each item with a set I inside the boxes stands for the product ∏_{i∈I} Z^{(i)}_{[r^s]} with some yet unspecified q-dependent coefficients. This pictorial expansion does not depend on r and s - but actually contributing are only the items with all entries i within the range −s < i < r. Clearly, it follows that the number of floors does not exceed min(r, s).
Coefficients
We now need to substitute the direct-sum symbols in (47) by concrete q-dependent coefficients, which depend on r and s and, in particular, explicitly respect the selection rules conjectured in the previous subsection.
The structure of the formula is already clear from the above examples. For three floors the weight factor is a product of two-floor factors, and in general the weight factors multiply floor by floor. Putting everything together, we obtain the differential expansion in the case of the trefoil. We tested this formula up to R = [8^3], R = [6^4] and R = [5^5].
Conclusion
In this paper we made a very plausible conjecture for explicit formulas for the rectangularly-colored HOMFLY polynomials of the figure-eight knot 4_1. Further conjectures for the corresponding superpolynomials and 4-graded hyperpolynomials of [31] should follow, according to [11] and [33].
The conjecture is made on the basis of a study of differential expansions, which are especially simple for defect-zero knots and, moreover, are nearly identical for 4_1 and for the trefoil 3_1. Arbitrarily-colored HOMFLY are known for the trefoil (as well as for any other torus knot) from the Rosso-Jones formula [57,19]; thus the only non-trivial exercise is to convert it into differential expansion form. This is indeed quite a tedious job, and it is described in the present paper. The result is eq. (42) for R = [rr] and eq. (55) for generic rectangular R = [r^s]. It directly generalizes the archetypal expression of [11] for symmetric R = [r] and antisymmetric R = [1^r] representations.
Further generalizations are needed in three directions:
• to non-rectangular diagrams
• to other knots with defect [44] zero
• to all knots
Each of these directions faces immediate difficulties. Hopefully, they will be resolved in the near future. | 4,445 | 2016-05-31T00:00:00.000 | [
"Mathematics"
] |
Determination of Diffraction Loss over an Isolated Double Edged Hill Using the ITU-R P.526-13 Method for Rounded Edge Diffraction
In this paper, the Recommendation ITU-R P.526-13 rounded edge diffraction loss method is used to determine the diffraction loss over a double edged hilltop in the path of a 6 GHz C-band microwave signal. The computation is based on a path profile with path length of 6188.665 m. The path profile has a maximum elevation of 412.75 m, which occurs at a distance of 2877.3 m from the transmitter. The line of sight clearance height is 35.393521 m and the occultation distance is 532.203 m. The diffraction loss computed for the double edged hilltop using the Recommendation ITU-R P.526-13 model is 42.563065 dB.
Introduction
In wireless communication systems, as a signal propagates along the path from transmitter to receiver, it experiences a reduction in signal strength which is generally referred to as path loss [1][2][3][4][5]. The path loss may include propagation losses caused by the natural expansion of the radio wave front in free space, absorption losses, as well as diffraction losses when part of the radio wave front is obstructed by an opaque obstacle [6][7][8][9][10][11][12]. In order to estimate the diffraction loss caused by isolated obstacles like hills, mountains and buildings, such isolated obstacles are modeled as single knife edge obstructions [13][14][15]. However, in reality, the obstruction presents more diffraction loss than the single knife edge approximation predicts. In that case, a rounded edge diffraction loss approximation may be applied to such isolated obstacles.
Over the years, several methods for the determination of rounded edge diffraction loss have been developed. One popular approach is to determine the excess diffraction loss above the knife edge diffraction loss. The excess diffraction loss can be computed according to the Hacking method [1], [17]. The Wait method is another method for computing the excess diffraction loss in addition to the knife edge approximation [18], [19]. However, in this paper, the method presented by the International Telecommunication Union (ITU) for computing diffraction loss over a single rounded obstacle is used to compute the diffraction loss over a double edged hilltop [20].
In all the methods of computing diffraction loss over a rounded edge, the basic approach is to fit a rounded edge to the vertex of the obstruction and then use the radius of curvature of the rounded edge to compute the diffraction loss [21], [22]. In most cases, hilly obstructions do not have a single-edged top; rather, more than one edge close together may be present. In this case, the single rounded edge fitted to the vertex of the hill will have to enclose all the adjacent edges on the hilltop, and its radius of curvature may be well above the value it would have had if it enclosed only one edge. The study in this paper considers a situation where there are two adjacent edges on the hilltop to which a single rounded edge is fitted and then used to determine the diffraction loss based on the ITU-R P.526-13 method for rounded edge diffraction [20].
The ITU-R P.526-13 Method for Diffraction over Single Rounded Edge
The diffraction loss for a single rounded obstacle according to Recommendation ITU-R P.526-13 is given as follows [20]: where J(ν) is the Fresnel-Kirchhoff loss due to an equivalent knife-edge placed with its peak at the vertex point. According to ITU-R P.526, the knife edge diffraction loss J(ν) is given as

J(ν) = 6.9 + 20 Log[√((ν − 0.1)² + 1) + ν − 0.1]    (2)

where ν is the diffraction parameter and λ is the signal wavelength, given as λ = c/f, where f is the frequency in Hz and c is the speed of light, 3×10⁸ m/s. h is the line of sight (LOS) clearance, which is obtained from the path profile and obstruction geometry, as shown in figure 2.
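As a sketch of how equation (2) is evaluated in practice, the snippet below implements the knife-edge loss together with the diffraction parameter in the form given in Recommendation ITU-R P.526 itself, ν = h·sqrt((2/λ)(1/d1 + 1/d2)) with all quantities in self-consistent units (metres); the ν expression is quoted from the Recommendation because the paper's own display for ν is not reproduced in this copy, and the call below illustrates only the knife-edge part J(ν), not the paper's full rounded-edge computation:

import math

def diffraction_parameter(h, d1, d2, f_hz, c=3.0e8):
    # Knife-edge diffraction parameter nu; h, d1, d2 in metres, f_hz in Hz.
    lam = c / f_hz                     # wavelength: lambda = c / f
    return h * math.sqrt((2.0 / lam) * (1.0 / d1 + 1.0 / d2))

def knife_edge_loss_db(nu):
    # Fresnel-Kirchhoff knife-edge loss J(nu) of equation (2), valid for nu > -0.78.
    return 6.9 + 20.0 * math.log10(math.sqrt((nu - 0.1) ** 2 + 1.0) + nu - 0.1)

# Illustrative call with the geometry reported in Table 2:
nu = diffraction_parameter(h=35.393521, d1=3086.6596, d2=3102.0054, f_hz=6.0e9)
print(nu, knife_edge_loss_db(nu))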
d1 is the distance from the transmitter to the point where the LOS clearance is measured, as shown in figure 2.
d2 is the distance from the receiver to the point where the LOS clearance is measured, as shown in figure 2. h and λ are in meters, and d1 and d2 are in kilometres. The radius of the circle fitted in the vicinity of the double edged hill vertex can be given by an expression in terms of the occultation distance (Seybold, 2005; Barué, 200), where D is the occultation distance, obtained from the graph plot of the path profile and the geometry of the obstruction, as shown in figure 2. Particularly, a line (referred to here as tangent 1) is drawn from the transmitter to be tangential to the path profile in the vicinity of the hill apex; let the tangent point of tangent 1 with the path profile be denoted as T1. Again, another line (referred to here as tangent 2) is drawn from the receiver to be tangential to the path profile in the vicinity of the hill apex; let the tangent point of tangent 2 with the path profile be denoted as T2. Then, D is the distance between T1 and T2. The point at which tangent 1 and tangent 2 intersect above the hill vertex, as shown in figure 2, becomes the knife edge point. The LOS clearance is the height from the line of sight to the point of intersection of tangent 1 and tangent 2. Let β be the angle the LOS makes with the horizontal, where H_S is the height of the transmitter, H_T is the height of the receiver and d is the distance between the transmitter and the receiver. The values of d, H_S and H_T are obtained from the path profile data.
Let α1 be the angle (in radians) between the LOS and tangent 1, and let α2 be the angle (in radians) between the LOS and tangent 2; then α is the external angle (in radians) between tangent 1 and tangent 2 at their point of intersection above the hill vertex, where α = α1 + α2. The angles α1 and α2 are obtained by the cosine rule.
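Written out, the geometry just described amounts to (a sketch; the sign conventions are assumptions, since the paper's displays are not reproduced here):

\[
\beta = \tan^{-1}\!\left(\frac{|H_S - H_T|}{d}\right), \qquad \alpha = \alpha_1 + \alpha_2,
\]

with α1 and α2 recovered by the cosine rule from the side lengths of the triangle formed by the transmitter, the receiver and the intersection point of the two tangents.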
Results and Discussions
The study is conducted for the L-band microwave frequency range, which spans 1 GHz to 2 GHz. Specifically, the 1 GHz and 1.9 GHz frequencies are considered in this paper. The elevation profile data used for the study is given in Table 1. From Table 1, the maximum elevation is 412.75 m and it occurs at a distance of 2877.3 m from the transmitter. Table 2 shows the diffraction loss and associated parameters for the double edged hilltop using the Recommendation ITU-R P.526-13 model. From Table 2, the path length (d) is 6188.665 m. Also, the tangent from the transmitter and the tangent from the receiver intersect at a distance of 3086.6596 m from the transmitter and a distance of 3102.0054 m from the receiver. The line of sight makes an angle of 0.0032335 radians with the horizontal. The LOS clearance height is 35.393521 m. The occultation distance is 532.203 m. The diffraction loss computed for the double edged hilltop using the Recommendation ITU-R P.526-13 model is 42.563065 dB.
Conclusions
The Recommendation ITU-R P.526-13 model for the rounded edge diffraction loss method is presented. The method is used to determine the diffraction loss over a double edged hilltop in the path of a 6 GHz C-band microwave signal. The computation is based on the path profile of a case study with a double edged hilltop. | 1,729 | 2017-06-12T00:00:00.000 | [
"Physics"
] |
Phylogenomics Reshuffles the Eukaryotic Supergroups
Background Resolving the phylogenetic relationships between eukaryotes is an ongoing challenge of evolutionary biology. In recent years, the accumulation of molecular data led to a new evolutionary understanding, in which all eukaryotic diversity has been classified into five or six supergroups. Yet, the composition of these large assemblages and their relationships remain controversial. Methodology/Principal Findings Here, we report the sequencing of expressed sequence tags (ESTs) for two species belonging to the supergroup Rhizaria and present the analysis of a unique dataset combining 29908 amino acid positions and an extensive taxon sampling made of 49 mainly unicellular species representative of all supergroups. Our results show a very robust relationship between Rhizaria and two main clades of the supergroup chromalveolates: stramenopiles and alveolates. We confirm the existence of consistent affinities between assemblages that were thought to belong to different supergroups of eukaryotes, thus not sharing a close evolutionary history. Conclusions This well supported phylogeny has important consequences for our understanding of the evolutionary history of eukaryotes. In particular, it questions a single red algal origin of the chlorophyll-c containing plastids among the chromalveolates. We propose the abbreviated name 'SAR' (Stramenopiles+Alveolates+Rhizaria) to accommodate this new super assemblage of eukaryotes, which comprises the largest diversity of unicellular eukaryotes.
INTRODUCTION
A well resolved phylogenetic tree describing the relationships among all organisms is one of the most important challenges of modern evolutionary biology. A current hypothesis for the tree of eukaryotes proposes that all diversity can be classified into five or six putative very large assemblages, the so-called 'supergroups' (reviewed in [1] and [2]). These comprise the 'Opisthokonta' and 'Amoebozoa' (often united in the 'Unikonts'), 'Archaeplastida' or 'Plantae', 'Excavata', 'Chromalveolata', and 'Rhizaria'. The supergroup concept as a whole, however, has been shown to be only moderately supported [3], and the evolutionary links among these groups are yet to be confirmed. These uncertainties may be due to the limited amounts of available data for most parts of the eukaryotic diversity. In particular, only a small fraction of the unicellular eukaryote diversity [4] has been subject to molecular studies, leading to important imbalances in phylogenies and preventing researchers from reliably inferring deep evolutionary relationships.
Using a phylogenomic approach we recently confirmed the monophyly of Rhizaria and addressed the question of its evolutionary history [19]. The analyses of 85 concatenated nuclear protein sequences led to two potential affiliations with other eukaryotes. According to the first hypothesis, Rhizaria was sister group to an excavate clade defined by G. lamblia, T. vaginalis, and Euglenozoa. The second hypothesis suggested that Rhizaria are closely related to stramenopiles, which form together with alveolates, haptophytes, and cryptophytes the supergroup of chromalveolates. Besides our study, the branching pattern between Rhizaria and other supergroups has been specifically evaluated only by Hackett et al. (2007), who reported a robust relationship between Rhizaria and members of the chromalveolates.
Here, we further address the phylogenetic position of Rhizaria within the eukaryotic tree using an extensive multigene approach. For this purpose, we have carried out two expressed sequence tag (EST) surveys of rhizarian species: an undetermined foraminiferan species belonging to the genus Quinqueloculina (574 unique sequences, Accession Numbers: EV435154-EV435825) and Gymnophrys cometa (Cienkowski, 1876) (628 unique sequences, Accession Numbers: EV434532-EV435153), a freshwater protist that has been shown to be part of core Cercozoa [20]. Using the novel EST datasets for the two rhizarians [21,22] and data from publicly available protists (TBestDB; http://tbestdb.bcm.umontreal.ca/searches/login.php), we constructed a taxonomically broad dataset of 123 protein alignments amounting to nearly 30000 unambiguously aligned amino acid positions. Our superalignment includes several representatives for all described eukaryotic supergroups. Our results show an unambiguous relationship between Rhizaria and stramenopiles, confirming the hypothesis we had previously proposed and suggesting the emergence of a new super assemblage of eukaryotes that we propose to name 'SAR' (stramenopiles+alveolates+Rhizaria).
Single-gene analyses and concatenation
Forty-nine eukaryotic species representative of all five current supergroups, and for which large amounts of data are available, were selected. We identified 123 genes (see Table S1) that fulfilled the following criteria: 1) at least one of the four rhizarian species, as well as at least one member of unikonts, plants, excavates, alveolates, and stramenopiles, was present in every single-gene alignment; 2) the orthology of every gene was unambiguous on the basis of single-gene bootstrapped maximum likelihood (ML) trees. This second criterion is particularly important in multigene analyses in order to avoid mixing distant paralogs in concatenated alignments, which would dilute the true phylogenetic signal with strong mis-signals and thus prevent the recovery of deep relationships [23]. Similarly, it is essential to detect and discard putative candidates for endosymbiotic gene transfer (EGT) or horizontal gene transfer (HGT). Hence, we submitted each of our single-gene alignments to ML reconstructions with bootstrap replications and systematically removed sequences that displayed ambiguous phylogenetic positions, whether due to paralogy or to gene transfer. For example, we found a few cases where B. natans and G. theta sequences actually corresponded to genes encoded in the nucleomorph genomes of these species. This restrictive procedure yielded a set of 123 single-gene alignments, each containing at least one rhizarian species, with only orthologous sequences and virtually no genes transferred either from a plastid or from a foreign source.
One possible approach to analysing such a dataset is to build a supermatrix formed by the concatenation of individual genes (for a review see [23]). After concatenation, our final alignment contained 29,908 unambiguously aligned amino acid positions. Overall, we observed an average of 39% missing data, but these sites were not uniformly distributed across taxa (see Tables S2 and S3 for more details). However, several studies have demonstrated that the phylogenetic power of a dataset is retained as long as a large number of positions remain in the analysis [24][25][26][27]. For example, Wiens [26,27] demonstrated that the inclusion of highly incomplete taxa (with up to 90% missing data) in model-based phylogenies, such as likelihood or Bayesian analyses, can cause dramatic increases in accuracy.
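The concatenation step itself is mechanical. Below is a minimal Python sketch, assuming simple per-gene FASTA alignments; the directory name, taxon labels, and the '?' padding character are illustrative assumptions, not details taken from the study:

```python
# Sketch: concatenate per-gene alignments into a supermatrix and report
# per-taxon missing data (all file and taxon names are hypothetical).
from pathlib import Path

def read_fasta(path):
    """Return {taxon: aligned_sequence} for one gene alignment."""
    seqs, name = {}, None
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line.startswith(">"):
            name = line[1:].split()[0]
            seqs[name] = []
        elif name:
            seqs[name].append(line)
    return {k: "".join(v) for k, v in seqs.items()}

def concatenate(gene_files, taxa):
    """Build a supermatrix; taxa absent from a gene are padded with '?'."""
    matrix = {t: [] for t in taxa}
    for f in gene_files:
        aln = read_fasta(f)
        length = len(next(iter(aln.values())))
        for t in taxa:
            matrix[t].append(aln.get(t, "?" * length))
    return {t: "".join(parts) for t, parts in matrix.items()}

genes = sorted(Path("alignments").glob("*.fasta"))   # hypothetical directory
taxa = ["Quinqueloculina_sp", "Gymnophrys_cometa"]   # plus the other species
supermatrix = concatenate(genes, taxa)
for t, seq in supermatrix.items():
    missing = seq.count("?") / len(seq)
    print(f"{t}\t{len(seq)} positions\t{missing:.0%} missing")
```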
Phylogenetic position of Rhizaria
The ML and Bayesian trees inferred from the complete alignment (Figure 1; see also Figures S1 and S2) recover a number of groups observed previously and are in most respects congruent with recently published global eukaryotic phylogenies [14,28,29]. A monophyletic group uniting Metazoa, Fungi, and Amoebozoa (altogether the unikonts) was robustly supported (100% bootstrap support, BP; 1.0 Bayesian posterior probability, BiPP); green plants, glaucophytes, and rhodophytes came together, albeit only weakly supported (56% BP; this node was not recovered in the Bayesian analysis, see Figure S2); a group composed of haptophytes and cryptophytes, as well as the excavates (without Malawimonas, which failed to branch consistently with the other excavate species), received only moderate support in the ML inference (68% and 61% BP, respectively) but 1.0 BiPP. Finally, alveolates, stramenopiles, and Rhizaria all formed monophyletic groups with 100% BP and 1.0 BiPP. Although most of the recognized eukaryotic supergroups are recovered in our analyses, the relationships among them are generally not well resolved, with two notable exceptions: the union of the unikonts and, much more interestingly, the strongly supported (BP = 100%; BiPP = 1.0) assemblage of stramenopiles, Rhizaria, and alveolates (clade SAR), with stramenopiles and Rhizaria robustly clustered together (BP = 88%; BiPP = 1.0) (clade SR). Comparisons of substitution rates between the different lineages were highly non-significant at the 1.25% level, indicating that all species evolve at very similar rates and thus rendering unlikely a possible artefact caused by long branches (data not shown).
To further test this unexpected nested position of Rhizaria between alveolates and stramenopiles, we compared different topologies by performing the approximately unbiased (AU) test, which is considered the least biased and most rigorous test available to date [30]. More precisely, we evaluated two questions: 1) are Rhizaria indeed monophyletic with stramenopiles and alveolates? 2) are Rhizaria specifically related to stramenopiles, to the exclusion of alveolates? Our analyses show that an alternative topology, corresponding to the best topology with Rhizaria forced not to share a common ancestor with the assemblage composed of stramenopiles and alveolates (Figure S3; Table 1B), had a likelihood significantly lower than that of the best ML tree obtained without constraint (Figure 1; Table 1A) at the significance level of 0.05 (P = 4 × 10⁻⁸). On the other hand, the two other possible positions for Rhizaria within the SAR grouping (Table 1D, E) could not be significantly rejected (P = 0.112 and P = 0.079, respectively), preventing the exclusion of a specific relationship between Rhizaria and alveolates or of an early divergence of Rhizaria. In addition, we tested the relationship between Rhizaria and excavates by evaluating all possible trees in which these two groups are monophyletic. None of these trees could be retained in the pool of plausible candidates (data not shown).
DISCUSSION
We present in this study the largest dataset currently available for eukaryote phylogeny, combining both an extensive taxon sampling and a large number of amino acid positions. Our analyses of this unique dataset bring strong evidence for the assemblage of Rhizaria, stramenopiles, and alveolates; we therefore propose to label this monophyletic clade 'SAR'. Although only weakly suggested in our previous multigene analysis [19], we show here, using a much larger dataset, that this specific grouping is in fact very robust. We confirm the existence of consistent affinities between assemblages that were thought to belong to different supergroups of eukaryotes and thus not to share a close evolutionary history. The addition of about 20 relevant taxa of unicellular eukaryotes as well as more than 30 genes (for a total of 123 genes) seems to have stabilized the topology, which consistently displays the monophyly of SAR. Within this newly emerged assemblage, Rhizaria appear to be more closely related to stramenopiles than to alveolates, but topology comparisons failed to discard the alternative possibilities (i.e., R(SA) or S(RA)). In addition, we clearly reject the putative relationship between Rhizaria and excavates [16,19], which had already been convincingly tested in [31].
Interestingly, an association between Rhizaria and stramenopiles could already be observed in 18S rRNA trees representing a very large diversity of eukaryotes (see for example [32][33][34]). More recently, the analysis of 16 protein sequences from 46 taxa also showed a robust clade consisting of Rhizaria, alveolates, and stramenopiles [29]. However, that work differs significantly from ours by rejecting the association of Rhizaria as sister to stramenopiles or as sister to all chromalveolates. Apart from our much larger dataset, it is unclear why our data display more flexibility with respect to the position of Rhizaria within the SAR monophyletic clade. More comprehensive taxon sampling for both Rhizaria and stramenopiles, particularly for early diverging species (e.g., radiolarians), is likely to shed light on the internal order of divergence within SAR.
These new relationships suggest that the supergroup 'Chromalveolata', as originally defined [16], does not correctly describe the evolutionary history of organisms bearing plastids derived from a red alga. In fact, our results confirm the lack of support that chromalveolates as a whole (i.e., including haptophytes and cryptophytes) have received in several studies [3]. The phylogenetic position within the eukaryotic tree of the monophyletic group haptophytes+cryptophytes is uncertain [13]. Globally, chromalveolates have been strongly supported by phylogenies of plastid genes and by unique gene replacements in these taxa [35][36][37], but the monophyly of all its members has never been robustly recovered with nuclear loci, even using more than 18,000 amino acids (Patron et al. 2007). Overall, the unresolved nodes between the chromalveolate lineages have prevented clear conclusions on this model of evolution [3,15]. The emergence of SAR may complicate the picture of secondary endosymbioses and questions the most parsimonious explanation of the evolution of chlorophyll-c containing plastids (see also [19,29,38,39]). At this stage at least two scenarios are conceivable, but neither can presently be favoured over the other, due to the uncertain position of the haptophytes and cryptophytes clade. First, a single engulfment of a red alga might have occurred at a very early stage of chromalveolate evolution, with the resulting plastid secondarily lost in certain lineages, such as ciliates and Rhizaria. Second, it is possible that stramenopiles (or alveolates, or even haptophytes+cryptophytes, depending on their real position within the tree) acquired their secondary plastid in an independent endosymbiosis event involving a red algal organism. If this latter scenario is correct, minimizing the number of endosymbiosis events, as proposed by the chromalveolate hypothesis, might not correspond to the true symbiogenesis history. So far, as many as 11 primary, secondary, and tertiary symbiotic events have been identified (see [12]). Notably, two independent secondary endosymbiosis events involving green algae have been recognized in members of excavates and Rhizaria: Euglenozoa and chlorarachniophytes [31], respectively. Hence, multiplying the number of secondary endosymbioses might better explain the phylogenetic relationships within eukaryotes than the chromalveolate hypothesis.
The new SAR supergroup implies that the major part of protist diversity shares a common ancestor. Indeed, the chromalveolate members alone already account for about half of the recognized species of protists and algae [40]. With the addition of rhizarians, a huge variety of organisms with very different ecologies and morphologies are now united within a single monophyletic clade. Finding a synapomorphy that would endorse the unification of these groups will be the next most challenging step in the establishment of eukaryote phylogeny.
Sampling, culture and construction of cDNA libraries
The miliolids of the genus Quinqueloculina were collected in the locality called Le Boucanet, near La Grande Motte (Camargue, France). They were sorted, picked, and cleaned by hand under a dissecting microscope. The culture of G. cometa was obtained from the culture collection of IBIW RAS (Russia) and maintained as described in [20]. Cells were collected by low-speed centrifugation, resuspended in five volumes of TriReagent (Invitrogen, Carlsbad, Calif.), and broken using manual pestles and adapted microtubes. Total RNA and cDNA were prepared as in [21]. EST sequencing of the Quinqueloculina sp. library was performed with the ABI-PRISM Big Dye Terminator Cycle Sequencing Kit and analysed with an ABI-3100 DNA Sequencer (Perkin-Elmer Inc., Wellesley, Mass.), all according to the manufacturer's instructions. The G. cometa library was sequenced by Agencourt Bioscience Corporation (Beverly, Mass.).
Construction of the alignments
We performed TblastN searches against GenBank using as queries a rhizarian dataset made of all translated sequences (translations done with transeq, available at the University of Oslo Bioportal; http://www.bioportal.uio.no) for R. filosa, Quinqueloculina sp., G. cometa, and B. natans. We retrieved and translated all sequences with an e-value cutoff of 10⁻⁴⁰, accounting for 46 new genes out of a total of 126. The remaining 80 genes corresponded to rhizarian proteins putatively homologous to sequences previously used to infer large-scale phylogenies [41] and available at http://megasun.bch.umontreal.ca/Software/scafos/scafos_download.html. In order to roughly check for orthology, we also added to these alignments the human sequence with the lowest e-value in our TblastN output, to make sure that no closer homologs were known. These 126 genes were used to build a very well-sampled dataset by adding all available relevant species. For this purpose, we considered all species in TBestDB as well as all other bikont taxa for which sufficient sequence data were available, and made a local database against which we ran TblastN searches with our rhizarian dataset (e-value threshold 10⁻⁴⁰).
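Filtering hits at a fixed e-value cutoff is easy to script. The sketch below assumes BLAST tabular output (-outfmt 6), in which the e-value is the 11th column; the input file name is a hypothetical placeholder:

```python
# Sketch: keep TblastN hits passing the e-value threshold used in the study.
EVALUE_CUTOFF = 1e-40

def filter_tblastn_hits(path, cutoff=EVALUE_CUTOFF):
    """Yield (query, subject, evalue) rows from a -outfmt 6 table."""
    with open(path) as fh:
        for line in fh:
            cols = line.rstrip("\n").split("\t")
            query, subject, evalue = cols[0], cols[1], float(cols[10])
            if evalue <= cutoff:
                yield query, subject, evalue

for q, s, e in filter_tblastn_hits("tblastn_vs_genbank.tsv"):  # hypothetical file
    print(f"{q}\t{s}\t{e:.2e}")
```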
To decide on the final set of genes used in this study, we carefully tested the orthology of each of the 126 selected genes by carrying out maximum likelihood (ML) analyses, including bootstrap support, with the program TREEFINDER (JTT, 4 gamma categories, and 100 bootstrap replications) [42]. For three genes, the overall orthology could not be assessed with enough confidence, and they were thus removed. More generally, taxa displaying suspicious phylogenetic positions were removed from the single-gene datasets.
Once this pre-screen was complete, our final taxon sampling comprised 49 species and 123 genes (Table S1). We concatenated all single-gene alignments into a supermatrix alignment using Scafos [43]. Because of the limited data for certain groups, and to maximize the number of genes per taxonomic assemblage, some lineages were represented by different closely related species, always belonging to the same genus (for details see Tables S2 and S3).
Phylogenomic analyses
The concatenated alignment was first analyzed using the maximum likelihood (ML) framework implemented in TREEFINDER, with the global tree searching procedure (10 starting trees) [42]. In order to double-check our topologies, we also ran RAxML (RAxML-VI-HPC-2.2.3) [44], using randomized maximum parsimony (MP) starting trees in multiple inferences and the rapid hill-climbing algorithm. Following the Akaike Information Criterion (AIC) [45] computed with ProtTest 1.3 [46], the RtREV+G+F model allowing between-site rate variation was chosen (calculations were done with 6 gamma categories). The WAG model was also tested and gave the same topologies. To estimate the robustness of the phylogenetic inference, we used the bootstrap method [47] with 100 pseudoreplicates in all analyses.
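AIC-based model selection reduces to comparing 2k − 2 ln L across candidate models, which ProtTest automates. As a toy illustration (the log-likelihoods and parameter counts below are hypothetical placeholders, not values from the paper):

```python
# Sketch: choose the substitution model with the lowest AIC.
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: AIC = 2k - 2 ln L."""
    return 2 * n_params - 2 * log_likelihood

candidates = {
    # model: (maximized log-likelihood, free parameters) -- hypothetical values
    "RtREV+G+F": (-1_502_310.4, 21),
    "WAG+G+F":   (-1_503_870.9, 21),
}
best = min(candidates, key=lambda m: aic(*candidates[m]))
for m, (lnL, k) in sorted(candidates.items(), key=lambda kv: aic(*kv[1])):
    print(f"{m}: AIC = {aic(lnL, k):,.1f}")
print("Selected model:", best)
```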
Bayesian analysis using the WAG+G+F model (4 gamma categories) was performed with the parallel version of MrBayes 3.1.2 [48]. The inference, starting from a random tree and using four Metropolis-coupled Markov chain Monte Carlo (MCMCMC) chains, consisted of 1,000,000 generations with sampling every 100 generations. The average standard deviation of split frequencies was used to assess the convergence of the two runs. Bayesian posterior probabilities were calculated from the majority-rule consensus of the trees sampled after the initial burn-in period, as determined by checking the convergence of likelihood values across MCMCMC generations (corresponding to 50,000 generations, depending on the analysis). The evolutionary rates of the selected species were calculated with the relative-rate test as implemented in RRTree [49], by doing pairwise comparisons of two ingroups belonging to either SAR, haptophytes+cryptophytes, excavates, or plants relative to the unikonts taken as the outgroup.
Table 1. A, B) Comparison between topology A (best tree, corresponding to Figure 1) and the alternative topology B (corresponding to the best tree when Rhizaria are forced not to be monophyletic with S and A; Figure S3). C, D, E) Comparisons between topology C (best tree) and the alternative topologies D and E. Abbreviations: A = alveolates; S = stramenopiles; R = Rhizaria. The underlined number corresponds to the significant P value of the rejected topology.
Tree topology tests
To better assess the phylogenetic position of Rhizaria, we conducted topology comparisons using the approximately unbiased (AU) test [30]. For each tested tree, site likelihoods were calculated using CODEML, and the AU test was performed using CONSEL [50] with default scaling and replicate values. To test the monophyly of the new assemblage SAR, we first compared our tree (Figure 1) to the best possible tree in which Rhizaria were forced to be outside SAR, given topological constraints corresponding to a trichotomy of unikonts, stramenopiles+alveolates, and the rest of the groups represented as a multifurcation (Figure S3). Secondly, we evaluated the placement of Rhizaria within the SAR clade by testing the three possible branching patterns between Rhizaria, stramenopiles, and alveolates.
Figure S1. Best RAxML tree of eukaryotes. Numbers at nodes represent the result of the bootstrap analysis; black dots mean values of 100% (one hundred bootstrap replicates were done). Nodes with support under 65% were collapsed. | 4,438 | 2007-08-29T00:00:00.000 | [ "Biology" ] |
Non-Isothermal Crystallization Kinetics of Poly(Ethylene Glycol)–Poly(l-Lactide) Diblock Copolymer and Poly(Ethylene Glycol) Homopolymer via Fast-Scan Chip-Calorimeter
The non-isothermal crystallization kinetics of the double-crystallizable poly(ethylene glycol)–poly(l-lactide) diblock copolymer (PEG-PLLA) and of the poly(ethylene glycol) homopolymer (PEG) were studied using the fast cooling rates provided by a Fast-Scan Chip-Calorimeter (FSC). The experimental data were analyzed by the Ozawa method and the Kissinger equation, and the total crystallization rate was represented by the crystallization half time t1/2. The Ozawa method succeeds here because secondary crystallization is inhibited by the fast cooling rates. The first-crystallized PLLA block provides nucleation sites for the crystallization of the PEG block and thus promotes it, which can be regarded as heterogeneous nucleation to a certain extent; by contrast, when the PEG and PLLA blocks crystallize together, the kinetics correspond to one-dimensional growth, reflecting a certain separation between the crystallization regions of the PLLA and PEG blocks. Although crystallization of the PLLA block provides heterogeneous nucleation conditions for the PEG block to a certain extent, it does not shorten the whole crystallization process, because that process includes both nucleation and growth and is therefore complex.
Introduction
Poly(ethylene glycol)-poly(L-lactide) diblock copolymer (PEG-PLLA) is widely used in the medical field due to its biodegradability, blood compatibility, and drug permeability [1]. In recent years, many studies have reported applications of PEG-PLLA with different structures in the field of medicine and pointed out its disadvantages, such as high crystallinity, slow degradation rate, and insolubility in water [2][3][4]. In fact, the crystal structure of PEG-PLLA affects its thermal, mechanical, and other physical properties, thus determining its application performance [5,6]. Understanding and controlling the crystallization behavior provides an effective method for predicting and adjusting the physical properties of crystalline polymers. In addition, just as additive nanoparticles influence the crystallization of nanocomposite polymers, the crystallization of the two blocks in the double-crystallizable block copolymer PEG-PLLA is bound to involve mutual influence, which is worth exploring [7][8][9].
In practice, semi-crystalline polymer materials usually undergo non-isothermal crystallization during processing and cooling, which largely determines the ultimate properties and practical application value of the materials. Therefore, studying the crystallization of polymers under non-isothermal conditions is of great significance for the technological optimization and preparation of high-performance polymer materials. Yang, J.L. et al. [10] and Kong, X.H. et al. [11] studied the non-isothermal crystallization of PEG-b-PLLA and PET-b-PEO (poly(ethylene terephthalate)-poly(ethylene oxide) diblock copolymer) by conventional differential scanning calorimetry (DSC), the conventional means of studying polymer crystallization kinetics. However, the fastest cooling rate of conventional DSC usually cannot suppress the crystallization of samples: it is difficult to avoid crystal nucleation and growth during the cooling process, which affects the nucleation behavior of polymer crystals in the low-temperature region [12][13][14][15]. The actual cooling rate in processing is often greater than the maximum cooling rate achievable by conventional DSC, but it can be reached by Fast-Scan Chip-Calorimetry (FSC) thanks to its extremely fast heating and cooling rates. Although FSC is widely used for homopolymers, such as isotactic polypropylene (iPP) [16], polyetheretherketone (PEEK) [17], and poly(ε-caprolactone) (PCL) [18], few researchers have applied it to copolymers, let alone double-crystallizable block copolymers, because of the complexity of the crystallization process. Due to the complex crystallization behavior of double-crystallizable block copolymers, the immaturity of non-isothermal crystallization kinetics theory, and the lack of precise and appropriate instruments and methods, there are few studies on the non-isothermal crystallization kinetics of PEG-PLLA. However, the difference between the crystallization temperatures of the PEG block and the PLLA block in PEG-PLLA is large, allowing PLLA to crystallize over a wide range while PEG remains in the molten state [19]. This feature provides a good starting point for studying its non-isothermal crystallization kinetics. Therefore, we propose a new method to explore the non-isothermal crystallization kinetics of the double-crystallizable block copolymer PEG-PLLA via FSC, which is expected to be applicable in a wider field.
In this paper, two cooling methods are used to rapidly cool PEG-PLLA by FSC, and the Ozawa method [20] and the Kissinger equation [21] are applied to the PEG-PLLA diblock copolymer to explore its non-isothermal crystallization kinetics and compare it with that of the PEG homopolymer.
Materials
Crystallization experiments were performed with PEG-PLLA and PEG, which were provided by Shenzhen Meiluo Technology Co., Ltd. (Guangdong, China). PEG has a weight-average molecular weight of Mw = 5000 g/mol. The weight-average molecular weights of the PEG block and the PLLA block in PEG-PLLA are 5000 g/mol and 2000 g/mol, respectively, and the corresponding weight percentages are 71.42% and 28.57%.
Test Instrument
A chip-calorimeter instrument, Flash DSC 2+ (Mettler-Toledo, Zurich, Switzerland), equipped with a Huber TC-100 intracooler, was employed. The maximum heating rate and the maximum cooling rate reach 50,000 K/s and 40,000 K/s, respectively. FSC is widely used due to advantages such as its ultra-fast heating and cooling rates, outstanding temperature control, and accurate time resolution. It can suppress crystal nucleation in a polymer sample during heating and cooling, avoiding any influence on the subsequent crystallization kinetics test. The results provide initial information about the aggregate structure inside polymer crystals, a new understanding of the annealing and melting behavior of polymer crystals, and a deeper understanding of the nucleation mechanism of polymer crystals. Schick, C. et al. [22][23][24] have suggested that heterogeneous nucleation can be effectively bypassed and homogeneous nucleation observed in the bulk at low temperatures when the fast cooling rates provided by FSC are used. Following this proposal, the present study assumes that the hypothesis holds; the crystallization kinetics of the diblock copolymer can then be deduced.
Methods
The sample was prepared under an optical microscope and then transferred onto the cross-marked area of the sample cell. A pre-melting operation was adopted to ensure good thermal contact between the sample and the sensor before applying the preset temperature program. In this step, the sample was heated to 180 °C and then cooled to room temperature, both at a rate of 1 K s⁻¹. In order to compare data obtained at scanning rates of different orders of magnitude, each heat flow rate was normalized by the corresponding scanning rate, and the heat flow measured by the Flash DSC 2+ was transformed into an apparent heat capacity. In our measurements, the same samples were used on the same sensor, and the results shared the same baseline. In addition, since the different cooling rates employed the same sample and the results are independent of sample mass, this parameter was not essential in the present analysis. The data treatment was performed with the STARe software (Version 10.0).
FSC can provide an extremely fast cooling rate that inhibits any ordering process, i.e., crystallization. The rate at which crystallization on cooling is just inhibited is called the critical cooling rate. Thus, crystallization of the PLLA block can be inhibited by cooling to 60 °C at more than its critical rate [14], so that the PLLA block crystallizes together with the PEG block during the step from 60 °C to −80 °C. We therefore used two different cooling methods to study PEG-PLLA (methods 1 and 2) and used method 1 for PEG. Method 1: cool directly from 180 °C to −80 °C, as shown in Figure 1a. Method 2: cool from 180 °C to 60 °C at more than the critical cooling rate (2000 K/s), then cool to −80 °C at the cooling rate under study, as shown in Figure 1b. The heating rate was 50 K/s, with an isotherm at 180 °C for 1 s to erase the thermal history.
Analysis of Non-Isothermal Crystallization Curves
Generally, the crystallinity of polymers is relatively low, so absolute crystallinity is not convenient for describing the crystallization process, and a more reasonable relative scale was adopted. The relative crystallinity X_C was characterized by the enthalpy method [25]:

X_C = \frac{\int_{T_0}^{T_c} (dH/dT)\,dT}{\int_{T_0}^{T_\infty} (dH/dT)\,dT}  (1)

where T_0, T_c, and T_∞ are the initial, final, and ultimate crystallization temperatures, respectively, and H is the crystallization enthalpy.
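Numerically, Equation (1) amounts to a cumulative integration of the baseline-corrected heat flow along the cooling ramp. A minimal sketch follows, using a synthetic Gaussian exotherm rather than measured FSC data:

```python
# Sketch: relative crystallinity from an exothermic heat-flow trace
# (the temperature grid and Gaussian peak are synthetic placeholders).
import numpy as np

def relative_crystallinity(T, heat_flow):
    """Cumulative trapezoidal integral of dH/dT, normalized to [0, 1]."""
    order = np.argsort(T)[::-1]          # integrate along cooling (high -> low T)
    T, hf = T[order], heat_flow[order]
    increments = 0.5 * (hf[1:] + hf[:-1]) * np.abs(np.diff(T))
    dH = np.concatenate(([0.0], np.cumsum(increments)))
    return T, dH / dH[-1]

T = np.linspace(40, -60, 500)                 # degrees C, synthetic grid
hf = np.exp(-0.5 * ((T + 25) / 6.0) ** 2)     # Gaussian exotherm near -25 C
Tc, Xc = relative_crystallinity(T, hf)
print(f"X_C = 0.5 reached near T = {Tc[np.searchsorted(Xc, 0.5)]:.1f} C")
```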
The results for the non-isothermal crystallization behavior of PEG and PEG-PLLA are shown in Figure 2. At relatively low cooling rates, Figure 2a shows only one crystallization peak, around −25 °C, while Figure 2b shows two crystallization peaks, around 75 °C and −25 °C. The pronounced crystallization peak near 75 °C in Figure 2b is attributed to the PLLA block [16]. It can thus be concluded that the PEG-PLLA copolymer is a double-crystallizable block copolymer, with a large difference of about 105 °C between the crystallization temperatures of the two blocks. In Figure 2b, the crystallization peak near 75 °C is very flat at 100 K/s; it becomes more and more convex with increasing cooling rate, until it disappears at 2000 K/s, where crystallization is inhibited. In Figure 2c, crystallization of the PLLA block is inhibited between 180 °C and 60 °C by the ultra-fast cooling rate (2000 K/s), so there are no crystallization peaks in this temperature range. With increasing cooling rate, the crystallization peaks of the PEG block in PEG-PLLA and of PEG tend to flatten, which is due to spherulite impingement and crowding [26]. All the crystallization peaks become wider and shift to lower temperatures as the cooling rate increases; that is, the faster the cooling rate, the later the crystallization, because the polymer cannot form crystals within the limited time. The critical cooling rates in (a), (b), and (c) are 1000-1200 K/s, 1200-1600 K/s, and 1000-1200 K/s, respectively.
The starting temperature (T_s), the temperature at maximum heat flow (T_max), and the corresponding relative crystallinity X_max at different cooling rates for PEG and PEG-PLLA are listed in Table 1. First, at each cooling rate, T_s and T_max are highest in (b) and lowest in (a), and both decrease with increasing cooling rate. Compared with PEG, the crystallization peaks of the PEG block in PEG-PLLA move towards higher temperatures, as seen from the increases in T_s and T_max. Compared with method 1, the crystallization peaks of the PEG block in PEG-PLLA under method 2 also move towards higher temperatures, again seen from the increases in T_s and T_max. Second, at the same cooling rate, X_max in (b) is higher than in (a). This can be understood as the PEG block in PEG-PLLA having better crystallizability because the crystallized PLLA block promotes the diffusion of PEG chains to the crystallization sites. Additionally, the average peak relative crystallinity of (b) is greater than that of (c), which means that crystallization of the PEG block on top of already-crystallized PLLA is easier than crystallization of the PEG and PLLA blocks together; that is, the first-crystallized PLLA block promotes the crystallization of the later-crystallizing PEG block. These quantitative data indicate that the non-isothermal crystallization kinetics of PEG and of the PEG block in PEG-PLLA differ, and that the two cooling methods affect the crystallization of the PEG block differently. When the amorphous phase is cooled to low temperature, the PLLA block in (b) crystallizes first and grows freely from the homogeneous melt to form a specific morphology, which must affect the subsequent crystallization of the PEG block. Table 1. Characteristic parameters of PEG and PEG-PLLA during the non-isothermal crystallization process: (a) PEG in method 1, (b) PEG-PLLA in method 1, and (c) PEG-PLLA in method 2.
Analysis by the Ozawa Equation
Non-isothermal crystallization kinetics were analyzed by the Ozawa method [20]. Ozawa assumed that the non-isothermal crystallization process can be decomposed into infinitesimal isothermal crystallization steps and extended the Avrami equation [27] to non-isothermal conditions. The Ozawa equation gives the relative crystallinity at temperature T as

1 - X_C(T) = \exp\left[-\frac{K(T)}{a^{q}}\right]  (2)

where K(T) is a cooling function of the process, a is the cooling rate, and q is the Ozawa exponent, which depends on the crystal growth and nucleation mechanism: q = 2 corresponds to one-dimensional growth, q = 3 to two-dimensional growth, and q = 4 to three-dimensional growth. Expressed in double-logarithmic form, Equation (2) can be rewritten as

\ln\left[-\ln\left(1 - X_C(T)\right)\right] = \ln K(T) - q \ln a  (3)

In the actual measurement, the relative crystallinity at a given temperature T was read from the non-isothermal crystallization curves for the different cooling rates. According to Equation (3), plotting ln[-ln(1 - X_C(T))] against ln a yields q from the slope and ln K(T) from the intercept; the fitting results are listed in Table 2.
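The double-log fit of Equation (3) at a single temperature is a simple linear regression; the sketch below uses made-up cooling rates and relative crystallinities, not values from Table 2:

```python
# Sketch: Ozawa fit at one temperature -- slope gives -q, intercept gives ln K(T).
import numpy as np

a = np.array([100.0, 200.0, 400.0, 800.0])   # cooling rates, K/s (hypothetical)
Xc = np.array([0.62, 0.41, 0.22, 0.09])      # X_C at the chosen T (hypothetical)

y = np.log(-np.log(1.0 - Xc))                # ln[-ln(1 - X_C(T))]
x = np.log(a)
slope, intercept = np.polyfit(x, y, 1)
q, lnK = -slope, intercept
print(f"Ozawa exponent q = {q:.2f}, ln K(T) = {lnK:.2f}")
```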
From Figure 3, it is evident that the Ozawa plots form a series of straight lines. From Table 2, the coefficients of determination R² are all greater than 0.99. The results show that the Ozawa method is successful for PEG and for the PEG block in PEG-PLLA under both methods 1 and 2.
In fact, for many copolymers, homopolymers, and blends, Ozawa theory has been shown to be inadequate to describe non-isothermal crystallization kinetics [25,[28][29][30]. These authors argued that the Ozawa method ignores secondary crystallization (resulting from spherulite impingement and confinement effects), among other issues such as the quasi-isothermal nature of the treatment. In general, primary crystallization and secondary crystallization are the two crystallization stages of crystalline polymers. Ozawa assumed that the effect of secondary crystallization, which occurs in the later stage of the cooling process, is negligible. However, secondary crystallization depends on many factors, such as the cooling rate, and cannot be ignored for some polymer systems; this should be taken into account when the Ozawa equation is used to analyze non-isothermal crystallization kinetics. The researchers mentioned above all used relatively low cooling rates (generally at most 40 °C/min), under which secondary crystallization occurs readily and cannot be ignored. As shown in Figure 3, the data points fit very well when a fast cooling rate is used, which confirms that secondary crystallization is inhibited under this condition. According to Table 2, q equals 1.829 in (a) and 2.491 in (b), showing that PEG and the PEG block in PEG-PLLA differ by about one growth dimension under the same cooling method, and further illustrating that the first-crystallized PLLA block affects the crystallization behavior of the later-crystallizing PEG block. Specifically, the PLLA crystals provide nucleating sites for the crystallization of the PEG block, which can be regarded as heterogeneous nucleation to a certain extent [7,22]. The difference in q between (b) and (c) means that inhibiting crystallization of the PLLA block has a significant effect on the subsequent crystallization. q in (c) equals 2.057, between 1.829 and 2.491, which shows that the case where the PEG and PLLA blocks crystallize together is intermediate between pure PEG and the PEG and PLLA blocks crystallizing in turn. According to the Ozawa method, q = 2.057 means that the joint crystallization of the PEG and PLLA blocks corresponds to one-dimensional growth, which also indirectly reflects a certain separation between the crystallization regions of the PLLA and PEG blocks.
The relationship between ln K(T) and temperature T is shown in Figure 4. The coefficients of determination R² of the fits for the three datasets (a), (b), and (c) are all greater than 0.92. K(T) is the cooling function and varies only as a function of temperature. For a given temperature, it depends on the nucleation process of the crystalline entities (homogeneous or heterogeneous), on the growth pattern, and on the growth rate; different crystallization mechanisms therefore influence it. ln K(T) is expected to be a linear function of temperature, and like other researchers [31,32] we find that it is. However, the values of K(T) in non-isothermal crystallization do not have the same physical significance as in isothermal crystallization, because the temperature changes constantly under non-isothermal conditions [10]. This affects the rates of nucleus formation and crystal growth, since both are temperature dependent.
Crystallization Activation Energy (∆E)
Considering the change of the peak temperature T_max with cooling rate, Kissinger derived the following equation from the Arrhenius equation [21]:

\ln\left(\frac{a}{T_{max}^{2}}\right) = \text{constant} - \frac{\Delta E}{R\,T_{max}}  (4)

where R is the gas constant (8.314 J/(mol·K)) and ΔE is the activation energy. Using the results in Table 1, ln(a/T_max²) was plotted against 1/T_max, as shown in Figure 5. The crystallization activation energy is obtained from the slope, i.e., ΔE = −R × slope. The Kissinger plots gave a series of parallel straight lines, each with its own slope and intercept.
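A Kissinger fit is likewise a linear regression; in the sketch below the peak temperatures are placeholders (given in °C and converted to kelvin before fitting), not measured values from Table 1:

```python
# Sketch: Kissinger fit -- slope of ln(a/Tmax^2) vs 1/Tmax gives dE = -R * slope.
import numpy as np

R = 8.314                                          # gas constant, J/(mol K)
a = np.array([100.0, 200.0, 400.0, 800.0])         # cooling rates, K/s
Tmax_C = np.array([-20.0, -24.0, -28.0, -33.0])    # peak temperatures, C (hypothetical)
Tmax = Tmax_C + 273.15                             # convert to kelvin

slope, _ = np.polyfit(1.0 / Tmax, np.log(a / Tmax**2), 1)
dE = -R * slope
print(f"Crystallization activation energy = {dE / 1000:.1f} kJ/mol")
```

With peak temperatures that decrease as the cooling rate increases, the slope is positive and ΔE comes out negative, consistent with the exothermic character noted below.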
The fitting results of the Kissinger equation are listed in Table 3. The coefficients of determination R² of the fits are all greater than 0.99, so the Kissinger equation adequately describes the non-isothermal crystallization kinetics of PEG and PEG-PLLA. The crystallization activation energies of (a), (b), and (c) are −62.52, −59.05, and −60.71 kJ/mol, respectively. The negative values simply indicate that crystallization is exothermic [33], consistent with Figure 2, while the absolute value of the activation energy reflects the crystallization capacity, ranked (a), (c), (b). The activation energies of (b) and (c) are smaller in magnitude than that of (a); that is, the crystallization ability of PEG-PLLA is stronger than that of PEG, which is attributed to the influence of the PLLA block. Because the magnitude of the activation energy of (b) is less than that of (c), the crystallization ability when the PEG and PLLA blocks crystallize in sequence is stronger than when they crystallize together, which is attributed to the first-crystallized PLLA block promoting the crystallization of the PEG block, consistent with the results discussed above. In addition, Yang, J.L. et al. [7] measured absolute activation energies of PEG and PEG-PLLA greater than 150 kJ/mol by conventional DSC, far greater than our measured values. This implies that the crystallization capacity of PEG and PEG-PLLA is stronger at fast cooling rates than at slow ones, which can be explained by crystallization occurring at very low temperatures at fast cooling rates, where the nucleation rate increases.
Analysis of Crystallization Rate
The bulk crystallization of macromolecules is phenomenologically divided into two steps: nucleation and growth. The crystallization process of polymer materials, especially the nucleation induction period, is often very long, and the crystallization half time, at which the relative crystallinity reaches 50%, is usually used to reflect the total crystallization rate [18,25]. In addition to the activation energy from the Kissinger equation, which reflects the crystallization capacity, we adopted the crystallization half time to represent the total crystallization rate and to describe the difficulty of the whole crystallization process. During non-isothermal crystallization, the relation between the crystallization half time t_{1/2} and the corresponding temperature T_{1/2} is

t_{1/2} = \frac{T_0 - T_{1/2}}{a}  (5)

where T_{1/2} is the temperature at half crystallization, T_0 is the initial crystallization temperature, and a is the cooling rate. Based on Equation (5), we calculated the crystallization half time at different cooling rates, as shown in Table 4. For visual comparison, its variation with cooling rate is shown in Figure 6. The higher the cooling rate, the shorter the time to complete half of the crystallization, i.e., the faster the total crystallization rate; the crystallization process clearly has a strong dependence on the cooling rate. With increasing cooling rate, the crystallization half times of the three cases converge; at a cooling rate of 400 K/s, 50% relative crystallinity is reached in almost the same time. For PEG and the PEG block in PEG-PLLA under the same cooling method, the crystallization half time of PEG is always smaller than that of the PEG block in PEG-PLLA, indicating that the total crystallization rate of the PEG block in PEG-PLLA is slower. Combined with the discussion above, although crystallization of the PLLA block provides heterogeneous nucleation conditions for the PEG block to a certain extent, it does not shorten the whole crystallization process, because that process includes both nucleation and growth, which reflects its complexity. Moreover, for the two cooling methods applied to PEG-PLLA, the crystallization half time of method 1 is always greater than that of method 2; that is, when the PLLA block crystallizes first and the PEG block later, the PEG block crystallizes more slowly than when the PEG and PLLA blocks crystallize together. This results from the confined crystallization effect of the PLLA block [10].
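Equation (5) is simple arithmetic; for illustration, with hypothetical values (not entries from Table 4):

```python
# Sketch: half time from Eq. (5), t_1/2 = (T_0 - T_1/2) / a.
T0, T_half, a = -15.0, -26.0, 400.0   # onset C, half-crystallization C, K/s (hypothetical)
print(f"t_1/2 = {(T0 - T_half) / a * 1000:.1f} ms")   # -> 27.5 ms
```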
Conclusions
The non-isothermal crystallization kinetics of the double-crystallizable copolymer PEG-PLLA were investigated and compared with those of the PEG homopolymer, exploiting the large difference between the crystallization temperatures of the PEG and PLLA blocks in PEG-PLLA and the fast cooling rates provided by FSC. The crystallization process is highly dependent on the cooling rate: with increasing cooling rate, the non-isothermal crystallization curves tend to flatten, and the crystallization peaks become wider and shift to lower temperatures. When rapid cooling rates are used, the Ozawa theory succeeds because secondary crystallization is inhibited. According to the Ozawa method, the crystallization of the PLLA block provides nucleating sites for the crystallization of the PEG block and can be regarded as heterogeneous nucleation to some extent, whereas the joint crystallization of the PEG and PLLA blocks corresponds to one-dimensional growth, reflecting a certain separation between the crystallization regions of the PEG and PLLA blocks. The relative crystallinity X_max corresponding to the highest peak in the crystallization curves and the crystallization activation energy calculated by the Kissinger equation indicate that the first-crystallized PLLA block promotes the crystallization of the PEG block. At the same time, the crystallization half time indicates that, although crystallization of the PLLA block provides heterogeneous nucleation conditions for the PEG block to a certain extent, it does not shorten the whole crystallization process, because that process includes both nucleation and growth and is therefore complex. In short, thanks to the fast cooling rates of FSC and the new cooling method, we obtained some results that differ from previous studies and a new understanding of PEG-PLLA. | 5,374.6 | 2021-04-01T00:00:00.000 | [ "Materials Science" ] |
How Should Global Fund Use Value-for-Money Information to Sustain its Investments in Graduating Countries?
It has been debated whether the Global Fund (GF), which supports the implementation of programs on the prevention and control of HIV/AIDS, tuberculosis (TB), and malaria, should consider the value-for-money (VFM) of the programs and interventions it supports. In this paper, we critically analyze the uses of economic information for GF programs, not only to ensure accountability to donors but also to support country governments in continuing to invest in cost-effective interventions initiated by the GF after financial support ends at graduation. We demonstrate that VFM is not a static property of interventions: it may depend on program start-up costs, economies of scale, improvements in provider effectiveness and efficiency as the program matures, and the acceptance and adherence of the target population. Interventions that are cost-ineffective in the beginning may become cost-effective in later stages. We consider recent GF commitments towards value for money and recommend that the GF support interventions with proven cost-effectiveness from program initiation as well as interventions that may become cost-effective afterwards. To this end, the GF and country governments should establish mechanisms to monitor the cost-effectiveness of the interventions invested in over time.
Introduction
The Global Fund (GF) is one of the biggest worldwide health development initiatives, focusing on bringing an end to HIV/AIDS, tuberculosis (TB), and malaria. Since its establishment in 2002, the Fund has relied on financial contributions from both public and private sectors. 1 In 2015, the GF had received in total US$33 billion in cumulative and fully paid contributions. A pledging conference, during which donors and funders commit to assist the goals of the GF, was held in 2016. This is a timely paper that addresses a long debate in global communities on whether the GF, which implements vertical programs in three major diseases, should consider the value-for-money (VFM) of its investments, at which level of governance, and how this can be operationalized. A case study of Thailand is presented, given the authors' firsthand experience there. Moreover, Thailand is now graduating from GF support, and there are discussions on the sustainability of GF investment. The authors reviewed relevant documents, both published and grey literature, and drew on direct experience of conducting health technology assessments (HTAs) [1] and health system and policy research to support decision-making in Thailand. We conclude by proposing a conceptual framework that can guide the GF and country recipients in incorporating VFM information when implementing and monitoring GF programs, even well after graduation or transition from support.
Thailand and the Global Fund
Thailand is an upper-middle income country with a population of 67 million and an average income per capita of $5561 in 2014. 2 Since 2002, Thais have enjoyed universal health coverage (UHC) funded through public sources. Seventy-five percent of the total health expenditure in the country, amounting to 513 billion baht (equivalent to $16 billion), is government funded. 3 Although Thailand is facing the burden of non-communicable diseases (NCDs) due to its aging society, it still has communicable diseases to overcome, especially HIV/AIDS, TB, and malaria. HIV dropped from the top disease burden among Thais in 1999, with 1.3 million disability-adjusted life years (DALYs) lost, to fifth place in 2009, with 0.44 million DALYs lost. TB ranks 17th, with DALYs lost dropping from 0.16 million in 1999 to 0.14 million in 2009. Malaria is not a common disease for most Thai communities, except in the border areas; its estimated burden in 2009 was 0.003 million DALYs lost, ranking 67th. 4 The UHC benefit package has included TB and malaria treatments since its inception, whereas antiretroviral treatment for HIV was only included in 2005, after the scaling up of treatment efforts from 2002. 5 Spending on HIV/AIDS accounted for 2.4% of total health expenditure in Thailand, approximately $330 million in 2011, of which around 80% came from public funding and 20% from others, predominantly the GF. 6 For TB, the total expenditure is approximately $30 million annually, with the government spending around $20 million. For malaria, the total spending is $9 million, mostly from the GF, with the government spending less than $1 million. 7 As the only major overseas development partner funding healthcare in Thailand, the GF has cumulatively contributed $540 million in grants for the prevention and control of HIV/AIDS, TB, and malaria since 2003. Because Thailand's economy is consistently growing, the country is graduating from GF support in 2016 for HIV and TB and in 2017 for malaria. Although the GF's financial support for HIV and TB is relatively small compared to the budget that the Thai government currently invests in the national HIV and TB programs (less than 10%), 8 the target organizations for GF grants are typically different from those receiving government investment. GF grants target non-state actors (NSAs) and/or community organizations to provide outreach services to populations that are overlooked by, or difficult to reach for, government programs. This is also due to the illegal status of most at-risk populations, such as injecting drug users (IDU), female sex workers (FSW), and migrant workers. The formal Thai healthcare sector plays a primarily supportive role for NSAs assisting the GF program for HIV and TB, for example through training or laboratory support. Because of its emphasis on diseases and conditions with high disease burden, UHC may disregard important disease burdens in small populations. However, UHC also focuses on infectious diseases that may begin in minority populations, which is justifiable to minimize the possibility of epidemic outbreaks. For malaria, both the informal and formal sectors, under various NSAs and the government, benefit from the GF grant.
The GF grant has been used for the training of health professionals and case finding in the formal sector; meanwhile, the informal sector receives the grant for communication and engagement in broader areas. Thailand is considered a role model for having achieved UHC for its constituents as well as for using evidence and HTA [1] to inform its health benefit package for medicines, vaccines, and non-pharmacological interventions. [9][10][11][12] Despite this, HTA has rarely been used to inform decisions made by Thailand's Country Coordinating Mechanism (CCM). The CCM consists of multiple stakeholders representing local partners and acts as the governing body of GF programs in the country. Its status as a separate entity may have prevented the integration of the benefit package development mechanism into the CCM's processes.
The Value-for Money of Global Fund Programs: A Silver Lining
In 2012, the Thai CCM requested the Health Intervention and Technology Assessment Program (HITAP) [2] to conduct a mid-term review and evaluate the VFM of the GF's program for HIV prevention targeting most-at-risk populations, namely men who have sex with men (MSM), IDU, FSW, and migrant workers. 13 While HTA [1] within the Thai health system has not been used specifically for infectious diseases in the population as a whole, its methods can still be used to understand the VFM of interventions for a targeted population. The results showed that the cost per infection averted was high and beyond the ceiling threshold (which at the time was 120,000 Thai baht per healthy life year gained), with the lowest cost for the IDU program at approximately 300,000 baht per HIV case averted and the highest for the MSM program at 11 million baht per case averted [3]. Three factors can explain the inefficiency of this program: firstly, targeting hard-to-reach marginalized groups resulted in higher costs; secondly, the focus on assisting the non-formal health sector and community organizations, neither of which has adequate infrastructure, required significant capital investment and human resource training; and thirdly, the program, having been implemented for only 2 years at the time of evaluation, had not yet reached its full potential, perhaps due to a lack of awareness and trust amongst the client or target population. These three factors and the resulting inefficiency may be common in settings other than Thailand, given that CCMs manage GF programs separately from country governments; however, this does not mean that CCMs should only support interventions that are cost-effective at their inception.
An ex-ante evaluation should be performed before or during grant application, and ex-post evaluations routinely during grant implementation. 14 This ensures that the CCM can classify the interventions into the categories shown in the Figure. There are four possible types of interventions for HIV, TB, and malaria. Intervention A shows good VFM from program commencement and in the long term. Interventions B and C are cost-ineffective in the beginning but show improved VFM over time; the difference between them is that the former can become cost-effective over a short period, perhaps even before a country's graduation from GF support. Intervention D is cost-ineffective from the beginning and remains so over time.
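As a hedged illustration of this typology (a toy sketch, not a GF tool), the snippet below classifies an intervention by comparing a hypothetical cost-effectiveness trajectory with a willingness-to-pay threshold at three time points; all numbers are illustrative:

```python
# Sketch: A/B/C/D classification from an ICER trajectory (hypothetical values).
THRESHOLD = 120_000  # baht per healthy life year gained, as cited for Thailand

def classify(icer_start, icer_graduation, icer_longrun, threshold=THRESHOLD):
    """Return the intervention category based on when the ICER falls below the threshold."""
    if icer_start <= threshold:
        return "A: cost-effective from commencement"
    if icer_graduation <= threshold:
        return "B: becomes cost-effective before graduation"
    if icer_longrun <= threshold:
        return "C: becomes cost-effective only post-graduation"
    return "D: cost-ineffective throughout -- candidate for termination"

print(classify(90_000, 80_000, 70_000))     # -> A
print(classify(300_000, 100_000, 90_000))   # -> B
print(classify(300_000, 200_000, 100_000))  # -> C
print(classify(500_000, 400_000, 350_000))  # -> D
```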
In theory, country governments should fund interventions like intervention A without GF support; nevertheless, if the health budget is insufficient, the GF should prioritize these interventions. It is difficult, however, to distinguish interventions B, C, and D in the early phase of implementation. Investing in B, C, and D should be done carefully, i.e., implementing them as pilots so that they can easily be discontinued, if necessary, before being scaled up. Accordingly, the GF and CCMs should continuously include cost-effectiveness analysis in their monitoring and evaluation mechanisms to determine the VFM of these interventions (not just whether they stay within budget, as outlined in the previous funding model) 15 and thus eventually detect and terminate high-cost intervention D. As for interventions B and C: given that intervention B is well within the means of the government to continue before or at the end of GF support, the GF should promote the prioritization of intervention B and the countries' corresponding ownership of and responsibility for the program. Intervention C, which can eventually become cost-effective for the government post-graduation, can be considered an optional investment for countries and a third priority. VFM can be monitored using both surrogate and final outcomes. Observing the final outcome at the very beginning of program implementation is often not straightforward. For a condom program, for example, it is often not possible to directly monitor whether the population uses condoms when having sex; instead there is surrogate evidence, such as a higher percentage of the population reporting condom use and a consequent reduction in the HIV rate. At the end of the program, however, it will be necessary to measure the impact on HIV infections per se. The following recommendations are offered to the GF, country governments, and their CCMs to ensure VFM for GF investment as well as a smooth transition from the GF grant once countries become more self-sufficient.
Policy Recommendations: To the Global Fund
• Continue ending the one-size-fits-all governance approach for CCMs - the GF needs to improve its understanding of the country context when working with countries. With the New Funding Model in 2014-2016 and 2017-2019, the GF is moving in the right direction by increasing country dialogue and working with country governments' national strategies. 15,16 However, countries may have different health governance arrangements, and the GF needs to keep certain principles and apply them according to context. One example is the principle behind the CCM of having a multi-stakeholder process for designing and implementing GF programs. This already exists in the Thai context, given that the National Health Security Office (NHSO) is responsible for treatment of HIV, TB, and malaria and for the prevention of HIV and TB, and already has multiple stakeholders on its board. If the GF works with the NHSO board, Thailand no longer requires a CCM, and the benefit of using the existing governance structure is that it ensures country ownership, coordination and integration of health policy, effective implementation, and long-term sustainability.
• Make the best use of performance-based financing (PBF) 17 - to avoid the perception of grants as a 'windfall,' the GF should emphasize PBF payments and ensure that the country is aware that if it does not perform well and keep to the target, the grant disbursement will be deducted from the next round.
With the GF's introduction of PBF, however, it needs to be careful to use the approach properly, avoiding punishing grant recipients who are working within their means but cannot reach their targets, dubbed 'poor performers,' because of difficulties beyond their control, e.g., political conflict or war, disasters, or the poor socio-economic status of the target population.
• Establish an HTA [1] mechanism for a VFM framework - the GF needs to establish mechanisms and tools that will help CCMs in countries recognize, through ex-ante assessment, whether the interventions submitted in a concept note or grant application are of category A, B, C, or D, and that can be used for ex-post M&E of the VFM of GF programs. In doing this, the GF may need to develop a technical manual, such as a reference case for economic evaluation and standards of evidence to guide critical appraisal, 18 and teams to assist countries with VFM assessments. The GF needs to compile data and information from grant applications and M&E processes that can in the future guide GF and non-GF countries on the value of investment in HIV, TB, and malaria interventions. In this role, the GF or its assigned partner could act as an archivist and databank.
• Incorporate social and ethical dimensions in HTA for the GF - to ensure success in ending HIV, TB, and malaria, the GF and country partners need to understand not only the health and economic impact of their investments but also the impact on equity of access to GF programs.
Understanding the social determinants of health for HIV, TB, and malaria, as well as their impact on programs targeting these diseases, will be crucial to ensure that the most difficult-to-reach and at-risk populations can benefit from the program. As in the case of the GF's programs in Thailand, ending these diseases means that even minority and hard-to-reach populations must be able to access these programs, not just the majority of the affected population. 19,20 Methods such as extended cost-effectiveness analysis and distributional cost-effectiveness analysis exist and have been applied to depict the distributional impact of allocation decisions and to help policy-makers make more equitable decisions. 21,22
• The GF should use HTA for the selection of health technologies and for price negotiation in its central procurement, which can then assist countries making procurements outside of headquarters. This process will help drive the VFM of the GF and avoid controversies over questionable investments. Using VFM evidence for the procurement of health technologies has advantages over price negotiation without such evidence (the current GF practice, which rests on its comparative advantage in bulk purchasing). Using VFM evidence for pricing means incentivizing industry to deliver good-value (quite distinct from cheap or unprofitable) innovation. It also helps the GF avoid the trap of investing in low-cost technologies that may have minimal impact, and of engaging in a pure cost-minimisation exercise by exerting its large purchasing power, which countries are less able to do after transitioning and which may bring complaints of stifling innovation. Our recommendations are in line with donor performance agreements such as those of the UK's Department for International Development (DfID) 23 and with global funding conduits' own VFM frameworks and strategies, and are increasingly informing key performance indicators for such institutions. Indeed, the GF's VFM and sustainability special initiative, often together with other stakeholders such as the World Bank, has encouraged the use of disease-specific models for optimising resource allocation, as in the case of Sudan, where the HIV model Optima was used to inform the national strategic plan for HIV, resulting in significantly improved modelled health outcomes. 24 Moreover, the GF Market Shaping Strategy makes an even clearer case for adopting cost-effectiveness analysis to inform product selection at central and country level, the latter through proactively funding in-country cost-effectiveness analyses for its grantees: "The GF…will proactively engage with recipients to share relevant analyses and information about likely product costs and comparative health technology assessments…the GF Secretariat...will connect recipients with these resources to inform country-driven health technology assessment. Engaging in this process can also be an opportunity to build country capacity for health technology assessment and how to incorporate this into product selection decisions." 25
Policy Recommendations: To the Countries
• With support from the GF and other international partners, the CCM (or equivalent authority) and local technical bodies need to develop capacity in using and assessing VFM, as well as the social, institutional, and ethical consequences of GF investments, in the context of a broad HTA [1] framework.
The CCM must be able to work with the local and international technical bodies that support it to continuously monitor and evaluate the VFM of their programs over time and classify interventions as A, B, C, or D. The CCM has the authority to discontinue investments that are found not to be good value for money. Moreover, the CCM and local partners need to understand the socio-economic, ethnic, and geographical factors that affect the programs' effectiveness and to incorporate these concerns in their design and implementation. Using HTA in the decision-making process can also address these issues.
• The GF is appropriately geared to assist countries until they can sufficiently support themselves. During this period, however, countries should be able to engage with GF work through a process that accounts for their existing public health programs and local challenges.
Interventions funded through the GF should be good value for investment for the country itself, eg, interventions A and B in the Figure. Once the transition begins, the GF can re-invest the money saved from programs now transferred to the government into other important and/or targeted programs. In conclusion, our paper illustrates the potential of aligning GF efforts with countries' priority setting, ensuring that vertical programmes are considered alongside those under national health insurance schemes, which are broader and encompass many areas such as NCDs. It also demonstrates the utility of using economic evaluation to guide GF and CCM investment and management over time. This mechanism will ensure the sustainability of cost-effective HIV, TB, and malaria interventions beyond the GF program and will ensure that aid budgets, whilst they are relevant, go much further. 26
Acknowledgements
This case study is part of a larger work commissioned to Results for Development (R4D) and led by Dr. Robert Hecht and Arjun Vasan. The authors appreciate their comments and suggestions for the improvement of this case study. HITAP is funded by the Thailand Research Fund under the senior research scholar grant on Health Technology Assessment [Grant RTA59800011]. The HITAP International Unit (HIU) was established with support from the Thai Health-Global Link Initiative Project (TGLIP) and the international Decision Support Initiative (iDSI) to provide technical assistance on health intervention and technology assessment to governments of low- and middle-income countries. iDSI is funded by the Bill & Melinda Gates Foundation, the UK's DfID, and the Rockefeller Foundation. This manuscript is partly supported by iDSI [Grant OPP1087363]. The findings, interpretations, and conclusions expressed in this article do not necessarily reflect the views of the funding agencies.
Ethical issues
Not applicable.
[3] The incremental cost-effectiveness ratios (ICERs) of the program were compared between the GF program and the Ministry of Public Health program. The ICER measures the cost of one healthy life year, accounting for the total program costs. The ceiling threshold is the government's willingness-to-pay for one healthy life year. | 4,639.6 | 2017-02-27T00:00:00.000 | [
"Medicine",
"Political Science",
"Economics"
] |
Studying the Effectiveness of Polyacrylamide (PAM) Application in Hydrocarbon Reservoirs at Different Operational Conditions
The water-soluble polymer PAM (polyacrylamide) is used in enhanced oil recovery (EOR) operations. It is pumped into water injection wells to increase the viscosity of the injected water and, in turn, to direct more oil towards production wells. This EOR process is proven to be sensitive to operational well conditions such as hydrocarbon reservoir temperature, as well as the salinity of the injected water and/or formation water. These operational conditions lead to technical challenges ranging from the solubility of PAM in injection water to the behaviour of PAM inside the reservoir. To gain a clear picture of the functionality of PAM in EOR applications, this report characterizes its behaviour in terms of degree of hydrolysis, determined using a Perkin Elmer Spectrum 100 Fourier transform infrared-attenuated total reflection (FTIR-ATR) spectrometer and nuclear magnetic resonance spectroscopy (1H NMR), and changes in solution viscosity, determined using a Fann Model 35 Couette viscometer and a Cole Parmer rotational viscometer. Different shear rates were investigated to determine the effect of shear on PAM gel stability. Experiments were performed for PAM mixed with formation brine at 50, 70, and 90 °C for ageing times of up to 30 days. The results indicate that the degree of hydrolysis achieved after 30 days is much higher in saline solutions than in pure water, and that this effect is more pronounced at higher temperatures. For example, after 30 days at 50 °C, the hydrolysis level observed in PAM mixed with brines was 53%, rising to 65% at 70 °C and 75% at 90 °C. Similar trends were observed for viscosity, with lower viscosity observed for samples at higher temperatures and salinities. It is thus reasonable to conclude that the degree of hydrolysis causes changes in the viscosity of the polymer gel, leading to a decline in its performance as it ages.
Introduction
In oilfield reservoirs, EOR is designed to reduce the remaining residual oil after water or gas flooding to a minimum value. Conventional methods of oil production yield about one-third of the oil initially in place in a given reservoir and the remaining oil, representing almost two-thirds of the oil in place, is an attractive target for the application of EOR methods [1].
Chemical enhanced oil recovery (CEOR) is one of the main techniques of EOR and includes polymer, surfactant, or alkaline flooding. Several examples of CEOR techniques using nanofluids or liquid suspensions of nanoparticles dispersed in distilled water or brine have recently been investigated by Nwidee et al. [2]. They showed that changes in the properties of nanoparticles depend on the nature of the materials used and the mechanisms applied. The injection of nanoparticle suspensions into porous media can lead to different adsorption, desorption, transport, and blocking behavior [3,4].
They can intervene in fluid mobility improvement, hence enhancing reservoir fluid displacement. Among the chemical techniques that can be employed, polymer flooding applications are most commonly used for the control of fluid mobility in oilfield reservoirs [3,5-8]. In an example of its use in China, over ten million tons of oil per year has been produced [1,9-11]. Two of the most frequently applied synthetic water-soluble polymers are PAM and hydrolyzed PAM (HPAM). PAM itself is considered to be the most widely deployed because of its low cost and risk [12].
In polymer flooding operations, an optimal viscosity of the displacing fluid (water) is required to ensure favourable mobility, leading to better oil recovery with good polymer injectivity at a lower cost. When a PAM solution is pumped into a reservoir from an injection well, the flow velocity, which is related to shear rate, will change from the well-bore to a few hundred feet away from the bore; consequently, the viscosity of the polymer solution will change [13]. This change in the rheological properties of the polymer solution can be influenced by temperature and salinity as the polymer ages.
Degradation of the polymer occurs when the amide groups present in the PAM structure are hydrolyzed in aqueous solution. Several reports have shown that the instability of PAM as it ages occurs mostly in the presence of elevated temperature and divalent salts [14,15]. Glover [16] explained that the most common dissolved salt in reservoirs is NaCl, but many other salts such as MgCl2 and CaCl2 may be present in smaller quantities. Furthermore, Mungan [7] claimed that mixtures of monovalent (NaCl) and divalent cations cause significant reductions in PAM viscosity. Such considerations have led to the recommendation of 70-82 °C as a safe temperature range for the use of polyacrylamide in seawater and brine during polymer flooding [15][16][17][18][19][20][21]. Recently, Uranta et al. [22] conducted a correlation analysis to determine the safe maximum temperature point (SMTP) for polyacrylamide (PAM) in saline solutions, and it was concluded that the temperature range between 70-82 °C is not adequate for the system given that the SMTP for PAM changes with type of salt and its concentration in the reservoir.
According to Uranta et al. [22] and as shown in Figure 1, the amide group in the structure of PAM consists of a carbonyl group bonded to one nitrogen and two hydrogen atoms (CONH2), with the carbonyl functional group composed of a carbon atom double-bonded to an oxygen atom (C=O). The amide group changes to carboxylic acid (RCOOH) through chemical transformation when dissolved in water at an elevated temperature. Additionally, this leads the amide group attached to the carbon in the PAM structure to be thermally hydrolyzed into negatively charged carboxylate (COO−) groups. In brine solutions, PAM is then open to ionic functionality due to the presence of monovalent (NaCl) or multivalent (MgCl2, CaCl2, etc.) compounds, giving rise to accelerated hydrolysis, which eventually results in a reduction in the viscosity of the PAM solution. It is therefore important to evaluate and understand the behaviour of polymer solutions under different operational conditions to ensure that the polymer is not degraded as it moves through the reservoir. This study deals specifically with reservoir salinity and temperature, but other reservoir characteristics, such as pressure, could also impact polymer behaviour [4]. A study by Knight [23] confirmed that the mobility behaviour of polyacrylamide (PAM) injected into a reservoir, and its mobility-control characteristics, can best be determined in the laboratory by measuring the solution viscosity, which is then used to anticipate the behaviour of polyacrylamide flowing in the porous media of oilfield reservoirs. Thus, a laboratory study has been carried out of the rheological (viscosity) and hydrolytic changes experienced by PAM solutions at temperatures of 50, 70, and 90 °C for different degrees of salinity. The Fann viscometer system utilized a concentric cylinder with rotor and bob radii of 1.8415 and 1.725 cm, respectively; a Cole Parmer rotational viscometer with an R2 stainless steel spindle was also used. The experimental results have been used to elucidate the time-dependent effects on the thermal stability of the thixotropic and pseudo-plastic behaviour of the PAM gel.
Polymer Solution Preparation
In this study, a high-molecular-weight (5-6 × 10^6 Dalton) non-ionic water-soluble polyacrylamide (PAM) was selected, sourced from Sigma-Aldrich (St. Louis, MO, USA). Two types of polymer solution were prepared: type 1 using deionized water, and type 2 using synthesized formation water (hereafter called brine) mimicking the Draugen reservoir, North Sea, with total dissolved salts of 43,280 ppm. The reservoir is of sandstone formation [16], and Table 1 presents the composition of the synthesized brine. Polymer solutions with or without brine were prepared at a 1% (w/v) polymer concentration (10 g) in a 1000 mL beaker mixed with an electric stirrer. To prevent polymer degradation due to the presence of oxygen, sodium thiosulphate (Na2S2O3) was added as an oxygen scavenger at a concentration of 1% (w/v), i.e., 10 g per 1000 mL of solution.
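As a quick check of the preparation arithmetic (a sketch of our own; neither helper function is part of the study's protocol):

```python
def grams_for_w_v(percent_w_v, volume_ml):
    """Mass of solute (g) needed for a given %(w/v) concentration and volume (mL)."""
    return percent_w_v / 100.0 * volume_ml

def ppm_to_g_per_l(ppm, density_g_per_ml=1.0):
    """Convert mass-based ppm (mg solute per kg solution) to g/L, assuming a
    solution density close to that of water (an approximation for this brine)."""
    return ppm * 1e-3 * density_g_per_ml

print(grams_for_w_v(1.0, 1000))  # 10.0 g of PAM (likewise for Na2S2O3)
print(ppm_to_g_per_l(43280))     # ~43.28 g of dissolved salts per litre of brine
```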
Measurement of Extent of Hydrolysis of PAM
The degree of hydrolysis gives a measure of the degradation of the polymer gel. It is defined as the actual number of carboxylate (COO−) groups that have replaced amide groups (CONH2) divided by the total number of amide and carboxylate groups [24,25]. A two-stage process was used to determine the degree of hydrolysis achieved in the aged polymer samples. In stage 1, the polymer samples collected at time zero were analysed using nuclear magnetic resonance spectroscopy (1H NMR) to set the initial degree of hydrolysis. Then, 20 mg of the PAM solution was dissolved in 1 mL of deuterium oxide (D2O) in a small vial, which was placed on a hot block at three different temperatures of 50, 70, and 90 °C for over 3 h. After ageing, the mixed solution was transferred from the vial to an NMR tube, which was inserted into the NMR instrument for scanning. The data for the initial degree of hydrolysis were analysed by identifying the peak area assigned to each functional group and were further processed using Bruker Topspin 3.5 software.
Then, to speed up the analysis in stage 2, the Fourier transform infrared (FTIR) analytical technique was used instead to measure the change in degree of hydrolysis after ageing. Each aged PAM solution was cast onto a watch glass and allowed to dry before being placed on the sensor of a Perkin Elmer Spectrum 100 FTIR-ATR spectrometer for recording. The instrument has an attached diamond ATR crystal on which the sample is placed; an arm is screwed down onto the sample to ensure good contact with the diamond phase, and screwing continues until a good spectrum appears on the computer screen. Spectra were recorded over wavenumbers ranging from 650 to 4000 cm−1 with 32 averaged scans. The absolute measurement provided by the 1H NMR for the time-zero samples was used as a calibration point for the change-in-absorbance measurements obtained from the FTIR, in order to derive the absolute degree of hydrolysis of the aged polymer samples.
Degree of hydrolysis of PAM in Thermally Aged Samples
The rate of hydrolysis of amide groups has been found to be the primary mechanism behind polyacrylamide (PAM) degradation [19]. Later interactions between the hydrolyzed polyacrylamide and saline solutions containing divalent ions could cause significant losses in solution viscosity. However, the rate of hydrolysis was found to depend mostly on temperature.
FTIR Measurements to Determine the Change in Degree of Hydrolysis in the Aged Samples
Previous studies [24][25][26] have proposed that data on the proportion of amide (CONH 2 ) groups converted to carboxylate (COO − ) could be used to determine the degree of hydrolysis (DH). FTIR can be used to monitor the change in absorbance associated with the stretching of the secondary amide (CONH 2 ) C=O to give a measure of the change in degree of hydrolysis of the amide group.
Emission or absorption spectra arise when PAM molecules undergo transitions between quantum states corresponding to internal energy levels. In infrared spectroscopy, the presence of different functional groups results in the absorption of energy at specific wavelengths and its re-transmission results in the formation of peaks. Table 2 shows the assignment of the FTIR characterization of the bands and spectra of PAM.
Primary amide NH2 symmetric stretching occurred at 3190-3170 cm−1, and the asymmetric stretching band of NH2 appeared at 3340-3332 cm−1. A combination of these two peaks was used to determine the transmittance of the amide group. Secondary amide N-H stretching was observed at 3300-3250 cm−1, with a corresponding secondary amide II overtone. The primary amide C=O stretching (CONH2) was assigned in a shift range of 1680-1630 cm−1. The vibrational modes of amide groups may be affected as a result of hydrogen bonding; therefore, the secondary amide C=O stretching (CONH2) was assigned between 1630-1603 cm−1.
Four significant peaks for carboxylate groups were observed at 1330-1600 cm−1 as a result of the stretching of acrylate. The band shifted with C-O stretching at around 1200-1300 cm−1; the vibrations also appeared as in-plane bending at around 1000-1300 cm−1 and as out-of-plane bending. To be of use, the value of transmittance is transformed into absorbance using Equation (1) [27]: A = log10(100/%T). The amide group content of the hydrolysed polyacrylamide was determined from the intensities of absorbance of the amide group (CONH2).
These results are presented in Figure 3a,b. The percentage increase in amide absorbance is equal to the difference between the initial amide absorbance at day zero (A0) and each amide absorbance at the designated ageing times (A1, A2, A4, …, A30) over the initial amide absorbance, expressed as a percentage, as shown in Equation (2): %CONH2 = (An − A0)/A0 × 100. Figure 4a-c present the percentage change in absorbance of amide groups (CONH2) against ageing time in days at 50, 70, and 90 °C. As shown in these figures, the percentage of amides increases from 18 to 30%, from 33 to 57%, and from 48 to 57% at the three temperatures, respectively. These results show that the percentage change in the absorbance of amide groups present in PAM increases with temperature, indicating that temperature is the major driving force in the degradation of PAM. In the presence of brine, the amide group in PAM is further hydrolysed: at the same temperature conditions, the percentage of amide groups increased from 25 to 61%, from 38 to 75%, and from 62 to 88%, respectively. The interaction of PAM with brine at 50, 70, and 90 °C therefore shows greater percentage increases in amide groups in the solutions than when brine is absent.
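A minimal sketch of Equations (1) and (2), assuming the standard relation between percent transmittance and absorbance; the function names and sample readings below are ours, not from the paper:

```python
import math

def absorbance(percent_T):
    """Equation (1): convert percent transmittance to absorbance,
    A = log10(100 / %T) = 2 - log10(%T)."""
    return math.log10(100.0 / percent_T)

def percent_change_amide(A0, An):
    """Equation (2): percentage change in amide (CONH2) absorbance
    relative to the day-zero value A0."""
    return (An - A0) / A0 * 100.0

# Hypothetical readings of the amide C=O band at day 0 and day 30:
A0 = absorbance(62.0)
A30 = absorbance(48.0)
print(f"{percent_change_amide(A0, A30):.1f} %")  # positive: absorbance grew
```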
NMR Measurements on Time Zero Samples
The rate of hydrolysis of amide groups was found to be the primary mechanism behind PAM degradation. In order to determine the degree of hydrolysis, Figure 5 illustrates the 1H NMR spectra for PAM dissolved in pure water and brine. In these two scans, the first peak, positioned at 4.8 ppm, represents the water content of the deuterium oxide solvent. The peaks measured at 2.20-2.40 and 1.60-1.80 ppm represent the amide (CONH2) and carboxylate (COO−) groups, respectively [26]. The degree of hydrolysis was determined by processing the calibration axis and integrating the peak areas using Bruker Topspin 3.5 software.
The integral values indicate the total amount of hydrogen (H) atoms within the molecular structure, and this helps in identifying the functional groups, such as CH, CH2, and CH3. The degree of hydrolysis (DH) was calculated using Equation (3): DH = ma/(na + ma) × 100, which follows from the definition of DH given above.
where na is the integral at the peak position of the amide groups (CONH2), as assigned by the hydrogen atom in CH, and ma is the integral at the peak position of the carboxylate group (COO−), as assigned by the hydrogen atom in CH.
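Equation (3) is not reproduced in this extraction; the sketch below follows directly from the definition of DH in the Methods, with na and ma read as peak integrals (the example values are invented):

```python
def degree_of_hydrolysis(n_a, m_a):
    """Equation (3): degree of hydrolysis (%) from the 1H NMR peak integrals
    of the amide (n_a, 2.20-2.40 ppm) and carboxylate (m_a, 1.60-1.80 ppm)
    groups: DH = m_a / (n_a + m_a) * 100."""
    return m_a / (n_a + m_a) * 100.0

print(degree_of_hydrolysis(n_a=6.1, m_a=3.9))  # ~39% for this made-up scan
```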
The degrees of hydrolysis of the initial samples for PAM dissolved in both pure water and brine are recorded in Table 3.
Table 3. Initial degree of hydrolysis (DHi) for pure water and brine samples.
For the initial samples at time zero, it is clear that the degree of hydrolysis rises with temperature for both sets of samples. The degree of hydrolysis for the brine dataset is systematically higher than that for the samples prepared in pure water.
To convert the percentage absorbance change into a degree of hydrolysis, the percentage increase in amide groups and the initial degree of hydrolysis (DHi) are combined, and the final expression is given in Equation (4): DHt = (100 + %CONH2)/100 × DHi,
where DHt is the degree of hydrolysis at each designated time, DHi is the initial degree of hydrolysis at day zero from the 1H NMR analysis, and %CONH2 is the percentage change in absorbance. Figure 6a,b show the final degree of hydrolysis of PAM plotted as a function of ageing time in pure water and brine, respectively.
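A short sketch of Equation (4) as given above (the example numbers are invented):

```python
def dh_at_time(dh_initial, percent_conh2_change):
    """Equation (4): DH_t = (100 + %CONH2) / 100 * DH_i, where DH_i comes from
    the 1H NMR analysis at day zero and %CONH2 is the FTIR-measured percentage
    change in amide absorbance at time t."""
    return (100.0 + percent_conh2_change) / 100.0 * dh_initial

# E.g. an initial DH of 30% and a 53% rise in amide absorbance give ~45.9%:
print(dh_at_time(30.0, 53.0))
```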
Using a similar approach, the degrees of hydrolysis of PAM in the presence of ions are plotted against ageing time for the temperatures studied in Figure 6b. The degree of hydrolysis in the brine solutions is systematically higher than that observed in the samples prepared in pure water. For instance, degrees of hydrolysis of about 39%, 46%, and 56% at 50 °C, 70 °C, and 90 °C, respectively, for PAM solution in pure water increased after ageing for up to 30 days to values of about 52%, 65%, and 74%, respectively, in brine solution.
All records show two distinct phases of hydrolysis. In phase one (0-1 day), the degree of hydrolysis rises very quickly. In phase two, the rate of change in degree of hydrolysis is reduced significantly and the trend takes a linear form.
Rheological Characterization of PAM
The viscosity of the displacing fluid is a necessary criterion for screening an injection fluid for chemical flooding, because an optimal viscosity of the displacing fluid is required to ensure favourable mobility and thereby achieve better oil recovery with good injectivity at a lower cost [1]. The viscous behaviour of PAM at different temperatures and salinities was measured in terms of its thermal stability, and the measurements revealed two types of rheological behaviour: thixotropic and pseudo-plastic.
Time-Dependent Effects on Thermal Stability of PAM Viscosity
As discussed in Section 3.1, PAM solutions at high temperature experience a higher degree of hydrolysis, leading to an increase in anionicity that could result in instability in the behaviour of the gel. When the degree of hydrolysis is above 33%, polymer degradation or precipitation may occur at high temperatures [18]. PAM exhibits thixotropic behaviour, where solution viscosity tends to decrease at a constant shear rate during an extended test period [28]. Figure 7 presents the measured viscosity levels of PAM solution at 50 °C in the presence and absence of brine for low shear rates of 10 and 30 rpm and a high shear rate of 600 rpm. The results show a steady decrease in viscosity for the lower shear rates, while the decrease is sharper for the higher shear rate. It is worth mentioning that, at the higher shear rate, much of the decrease in viscosity occurred within the first 10 days of ageing; after that, no significant change in viscosity was observed.
The low shear rate of 10 rpm leads to a steady fall in viscosity over time, while at 30 rpm a similar trend is observed but with a slightly larger fall in viscosity. The high shear rate of 600 rpm gives a much larger overall fall in viscosity, which appears to take place in two stages: a rapid fall over the first 10 days (stage 1) that then levels off over the remaining time, suggestive of shear-thinning behaviour and the collapse of the gel structure.
The same approach was adopted for the higher temperatures, and the data are reported in Figures 8 and 9.
As can be seen from Figures 7-9, the rate of fall in viscosity increased with temperature for the PAM solutions in the presence of brine. At 90 °C, the transition to stage 2 behaviour occurred already at 30 rpm, indicating that gel collapse is accelerated at higher temperatures and lower rotational speeds.
Percentage Change in Viscosity of PAM Solution
In the application of polyacrylamide in oilfields, it is of primary importance to ensure that the polymer solution remains effective over long periods at different temperatures. To determine the loss of viscosity of the PAM solution, combined operational conditions of shear rate, time, brine content, and temperature (50 °C, 70 °C, and 90 °C) were analyzed using Equation (5): viscosity loss (%) = (V0 − Vn)/V0 × 100, where V0 is the initial polymer solution viscosity before ageing in the oven, and Vn is the viscosity at each ageing time. These results are shown in detail in Figure 10. At all temperatures and rotational speeds, it was observed that the longer the ageing time of the PAM solution in either pure water or brine, the greater the loss of PAM viscosity. For instance, at the higher temperature of 90 °C and after 30 days of ageing, the loss of viscosity was 78-84% and 71-77% for PAM mixed with brine and with pure water, respectively. Meanwhile, at the lower temperature of 50 °C, the losses in viscosity were 66-69% and 58-61%, respectively.
To investigate the efficiency of PAM performance under different operational conditions, the influence of shear rate on viscosity retention was examined. Tests were conducted at shear rates of 17, 51, and 1021 s−1 for temperatures of 50, 70, and 90 °C using Equation (6): viscosity retention (%) = Vt/V0 × 100, where V0 is the initial viscosity and Vt is the viscosity at each time interval. The results for viscosity retention at 50 °C, 70 °C, and 90 °C of PAM mixed with pure water and with brine under the conditions studied are presented in Figure 11. The highest degree of viscosity retention is observed at 50 °C, followed by 70 °C and 90 °C. The lower the shear rate, the better the retention of viscosity in the solution. The presence of brine in the polymer solution led to lower retention of viscosity. The minimum retention arose in the case where PAM was mixed with brine and experienced the highest shear rate of 600 rpm for the longest ageing time of 30 days at the highest temperature.
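Equations (5) and (6) are two views of the same measurement; a sketch with an invented ageing series:

```python
def viscosity_loss_percent(v0, vn):
    """Equation (5): percentage loss of viscosity relative to the initial
    (pre-ageing) viscosity v0."""
    return (v0 - vn) / v0 * 100.0

def viscosity_retention_percent(v0, vt):
    """Equation (6): percentage of the initial viscosity retained at time t;
    note that retention = 100 - loss."""
    return vt / v0 * 100.0

v0 = 48.0                                # cP, hypothetical day-0 reading
series = {1: 40.0, 10: 21.0, 30: 12.0}   # day -> viscosity (invented values)
for day, v in series.items():
    print(day, f"loss {viscosity_loss_percent(v0, v):.0f}%",
          f"retention {viscosity_retention_percent(v0, v):.0f}%")
```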
Shear Dependence of the Viscosity of PAM Solution (Mechanical Degradation)
Mechanical degradation through shearing devices can lead to significant reductions in viscosity described as a reversible decrease in viscosity with increasing shear rate (shear thinning). This behaviour of the polymer gel is known as pseudo-plastic behaviour, and it can occur when PAM solution is injected into a reservoir from the injection well [28][29][30]. The flow regime usually changes as the solution flows first through the wide well-bore and finally to the reservoir. Because fluid viscosity is a function of shear rate, the viscosity of the solution will also change from where it is near the well-bore compared with in the depths of a reservoir. The pseudo-plastic behaviour of PAM may also occur in pipes and through chokes, valves, or pumps.
To determine the shear dependence of the viscosity of PAM solution when the polymer is added to a water-flooding operation, tests were conducted at 50, 70, and 90 °C and at shear rates of 5, 10, 17, 51, 170, 340, 510, and 1021 s−1. The results are shown in detail in Figures 12-14. At low shear rates, the viscosity of the PAM solution is reduced less than at high shear rates. However, the reduction in viscosity at a given shear rate also proved to be affected by ageing time and temperature: at low temperature and low shear, viscosity fell more slowly than at high temperature and high shear rate. Therefore, from the rheological measurements of fluid flow described above, it is recommended that polymer flooding be performed under laminar flow conditions and at a lower operational temperature, as shearing forces and elevated temperature may degrade the polymer and hence suppress its performance. A power-law fit, as sketched below, offers one way to quantify this shear dependence.
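The shear-thinning behaviour described in this subsection is commonly summarized by the Ostwald-de Waele power-law model, mu = K * (shear rate)^(n-1) with n < 1 for a pseudo-plastic fluid. The paper does not report such a fit; the sketch below merely illustrates how K and n could be extracted from viscometer readings (the data points are invented):

```python
import numpy as np

# Invented apparent-viscosity readings (cP) at the shear rates used in the study (s^-1):
shear_rates = np.array([5.0, 10.0, 17.0, 51.0, 170.0, 340.0, 510.0, 1021.0])
viscosities = np.array([60.0, 46.0, 38.0, 24.0, 14.0, 10.5, 8.8, 6.2])

# For a power-law fluid, mu = K * gamma_dot**(n - 1), so log(mu) is linear
# in log(gamma_dot) with slope (n - 1) and intercept log(K).
slope, intercept = np.polyfit(np.log(shear_rates), np.log(viscosities), 1)
n = slope + 1.0          # flow-behaviour index; n < 1 indicates shear thinning
K = np.exp(intercept)    # consistency index

print(f"n = {n:.2f}, K = {K:.1f}")
```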
Conclusions
Based on experimental studies of the effectiveness of polyacrylamide (PAM) in different operational conditions, the following conclusions can be drawn: (1) The degree of hydrolysis of the amide groups of the PAM polymer increases with temperature. As the polymer ages, an increase in hydrolysis takes place in two stages. In stage one, a rapid rise in hydrolysis is seen over the first day. In stage two, the rate of hydrolysis is reduced and the increase is a linear function of time. With the addition of brine, a similar pattern is seen but the rate of hydrolysis in the first stage is increased.
(2) The highest viscosity retention is observed at the lowest temperature. The lower the shear rate, the better the retention of viscosity in the solution. The addition of salt to the solution leads to a decreased retention of viscosity. The minimum retention occurs in the case where PAM is mixed with brine and experiences the highest shear rate of 600 rpm for the longest ageing time of 30 days at higher temperature.
(3) At high shear rates, the rate of fall in viscosity appears to occur in two stages, with an initial rapid fall followed by a very slow reduction after about 10 days. This suggests that the nature of the gel changes after 10 days and the gel collapses.
(4) At higher temperatures, the onset of the collapse of the gel occurs at lower shear rates. The increase in degree of hydrolysis occurs faster than the reduction in viscosity. This suggests that the degree of hydrolysis is not the only factor governing the viscosity of the gel. | 9,181.2 | 2018-08-22T00:00:00.000 | [
"Engineering"
] |
Chiral dynamics with unstable particles
We review the achievements of the project C.9. Topics addressed include the magnetic moment of the ρ-meson, elastic pion-nucleon scattering in various settings of chiral perturbation theory, the reaction πN → ππN as well as new directions in chiral perturbation theory.
Introduction
The project C.9 has been funded over the third period of the SFB/TR 16. It grew out of the project C.3, which was funded in the first period. Within this project, we have carried out detailed investigations of pion-nucleon scattering using various formulations of baryon chiral perturbation theory (ChPT) and addressed the role of the Δ(1232) isobar. Furthermore, we have explored the possibility of extending the applicability of ChPT beyond the low-energy region, verified the consistency of the complex-mass renormalization scheme at the two-loop level, and performed a number of applications in the two-nucleon sector. This project has been quite successful, which is partly due to the intelligent use of the lump-sum funds.
Our contribution is organized as follows. In section 2, we present the calculation of the pion-mass dependence of the magnetic moment of the ρ-meson. Sections 3 and 4 are devoted to elastic πN scattering and the reaction πN → ππN, respectively. Next, in section 5, we discuss new directions in ChPT, focusing, in particular, on the applicability of the complex-mass renormalization scheme beyond the one-loop level and on a possibility to extend the ChPT framework beyond the low-energy region. Finally, section 6 describes the applications in the nucleon-nucleon (NN) sector. Here, we discuss the electromagnetic form factors of the deuteron, a non-perturbative inclusion of the subleading contact interaction in the 1S0 channel within a renormalizable framework, and new fixed points of the renormalization-group equation. The pion-mass dependence of the magnetic moment of the ρ-meson has been calculated at one-loop order. To this aim, we have employed the most general chiral effective Lagrangian for ρ- and ω-mesons, pions, and external sources, where the individual terms are taken in the parametrization of model III of Ref. [2]. For a general review of effective chiral Lagrangians with vector mesons, see Ref. [3]. We made use of the universality of the vector-meson couplings, which was shown in Ref. [4] to follow from the consistency of an effective field theory (EFT) with respect to perturbative renormalization. Furthermore, the coupling constant g_ρππ obeys the KSFR relation M_ρ^2 = 2 g_ρππ^2 F_π^2, with F_π denoting the pion decay constant.
The magnetic moment of the ρ-meson can be extracted from the resonant part of the five-point function in the reaction ππ → γππ, as visualized in Fig. 1. Assuming that the interaction terms with a higher number of derivatives and/or more fields are suppressed by powers of some large hadronic scale, we have performed a one-loop calculation in terms of the expansion parameter ξ = g_ρππ^2/(4π)^2 ≈ 0.2. At tree level, the magnetic moment is found to be real, and the gyromagnetic ratio is g_ρ = 2. One-loop diagrams yield corrections of the order of 10% and lead to a small imaginary part of g_ρ. The resulting pion-mass dependence of g_ρ is shown in Fig. 2. At the physical point, we found Re(g_ρ) ≈ 2.2, in good agreement with the most recent lattice QCD calculation of Ref. [5], (g_ρ)_unquenched = 2.4(2). Earlier lattice QCD studies have reported the values (g_ρ)_quenched ∼ 2.3 [6] and (g_ρ)_unquenched = 1.6(1) [7].
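As a quick numerical cross-check of the quoted expansion parameter, here is a sketch using standard inputs Mρ ≈ 775.26 MeV and Fπ ≈ 92.2 MeV (the script is ours and not part of the project):

```python
import math

M_rho = 775.26   # MeV, rho-meson mass
F_pi = 92.2      # MeV, pion decay constant

# KSFR relation: M_rho^2 = 2 * g_rho_pi_pi^2 * F_pi^2
g_rho_pi_pi = M_rho / (math.sqrt(2.0) * F_pi)

# Expansion parameter of the one-loop calculation:
xi = g_rho_pi_pi**2 / (4.0 * math.pi)**2

print(f"g_rho_pi_pi = {g_rho_pi_pi:.2f}")  # ~5.9
print(f"xi = {xi:.2f}")                    # ~0.22, consistent with xi = 0.2
```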
In our paper [19], we have analyzed πN scattering up to fourth order, Q^4, within different formulations of ChPT. Specifically, we employed the heavy-baryon (HB) formulations utilizing the standard counting of the nucleon mass as m_N ∼ Λ_b (HB-πN), with Λ_b denoting the breakdown scale of the chiral expansion, and the counting scheme with m_N ∼ Λ_b^2/Q (HB-NN) employed in few-nucleon studies. We have also performed calculations within a covariant (Cov) formulation of ChPT based on the extended on-mass-shell (EOMS) renormalization scheme [20-22]. Our analysis differs in several aspects from the earlier studies of this topic mentioned above. First, the determination of the low-energy constants (LECs) was carried out directly using the available πN scattering data instead of relying on partial-wave analyses (PWA), see also Ref. [23] for a similar approach. Secondly, we have benefited from the developments in project B.6 by employing a novel approach, formulated in Ref. [17], to estimate the theoretical uncertainty from the truncation of the chiral expansion, in order to stabilize the fits with respect to the energy range. Finally, we have discussed in detail the renormalization and finite shifts of the various LECs within the covariant formulations and studied the role of the Δ(1232) isobar by treating it as an explicit degree of freedom at lowest order in the chiral expansion.
The main results of our study can be summarized as follows. In the Δ-less case, all three considered approaches are found to lead to a comparable description of low-energy πN data, and the extracted values of the LECs compare well with those reported in the literature at the corresponding chiral orders. While the inclusion of the theoretical uncertainties in the definition of the χ^2 helps to stabilize the fits by decreasing the relative weight of high-energy data, we found clear indications of a lower breakdown scale of the chiral expansion than the assumed Λ_b ∼ 600 MeV [17]. This conclusion is supported by the increase of the reduced χ^2 with energy, as visualized in the left panel of Fig. 3. It is also reflected in the extracted values of the LECs, which show sizable sensitivity to the energy range used in the fits, see the upper panel of Fig. 4, and to the chiral order. Furthermore, some of the LECs take rather large numerical values at order Q^4 in the covariant approach. Clearly, one expects the convergence of the chiral expansion for πN scattering to be slow due to the appearance of the Δ(1232) resonance. The situation can be improved by treating the Δ-isobar as an explicit degree of freedom [24-26]. To quantify the effects of the Δ-isobar, we have repeated our analysis including its lowest-order contributions. The only new parameters, namely the mass of the Δ and the πNΔ coupling constant, were fixed to the Breit-Wigner and the large-N_c values, respectively. While the quality of the reproduction of the πN scattering data well below the Δ-region is not significantly impacted by the explicit inclusion of the Δ-isobar, we indeed found an improved convergence of the resulting EFT. The corresponding breakdown scale is consistent with, or even slightly larger than, the estimated Λ_b ∼ 600 MeV, see the right panel of Fig. 3. Not surprisingly, the Δ-full formulation allows for a more reliable determination of the LECs, which turn out to be rather stable with respect to the choice of the fitting range, see Fig. 4. We also found all LECs in the Δ-full approach to be of natural size. In particular, at order Q^4 in the covariant Δ-full approach, we obtained natural-sized values both for the order-Q^2 LECs and for the relevant order-Q^3 πN LECs, in contrast to the Δ-less approach discussed above.
Last but not least, the resulting phase shifts are found to be in very good agreement with the ones determined within the project B.3 in the framework of Roy-Steiner (RS) equations, where detailed error estimates of all input quantities, the solution procedure, and truncations were performed [27-29]. As a representative example, we show in Fig. 5 our predictions for selected S- and P-waves along with the estimated truncation errors, in comparison with the results from Ref. [27].
While the above results look rather promising and indicate the importance of the explicit treatment of the Δ-isobar, its inclusion beyond tree level in the covariant framework is a rather challenging task due to the large complexity of the calculations emerging from the Δ propagators. In Ref. [30], we have performed a complete analysis at the leading one-loop order (i.e., Q^3) in the covariant chiral EFT with explicit Δ degrees of freedom. In this work, all possible nucleon- and Δ-propagator contributions have been calculated up to the leading one-loop order. To achieve this goal, we applied the EOMS (complex-mass [31]) scheme to diagrams involving pion and nucleon lines (delta lines) and determined the corresponding renormalized LECs by performing fits to the phase shifts obtained in the RS-equation analysis of Ref. [27] up to a center-of-mass energy of 1.11 GeV (1.2 GeV for the P33 partial wave). In Fig. 6, we show our results for the selected partial waves along with the theoretical uncertainties.
[Figure 6 caption, partly recovered: results of Ref. [27] are shown, while circles without error bars represent the phase shifts from the George Washington partial-wave analysis [32]. The solid (red) line represents the result of Fit III of Ref. [30]. The narrow red error bands correspond to the uncertainties propagated from the errors of the LECs, while the wide dashed error bands show truncation uncertainties at order Q^3 estimated using the approach of Ref. [17].]
We have also looked at the πN sigma term but were unable to obtain an accurate prediction for this quantity. Our results are, however, consistent with the value found in the RS analysis, σ_πN = 59.1 ± 3.5 MeV, within the large errors of our determination. We have also extended these studies to the SU(3) sector by calculating the masses and the corresponding sigma terms of the baryon octet and studying the strangeness content of the nucleon.
The reaction πN → ππN
In addition to the elastic channel of πN scattering, we have studied in Ref. [33] single pion production off nucleons from threshold up to the delta resonance region, extending earlier ChPT studies from Refs. [34-37]. With three pions in the initial and final states, the theoretical study of this process provides complementary information to elastic scattering and offers access to certain LECs which are not probed in the reaction πN → πN. In particular, the LEC d16 is of great interest, as it governs the quark-mass dependence of the axial charge of the nucleon. In fact, the lack of knowledge of the precise value of d16 is one of the main sources of theoretical uncertainty in chiral extrapolations of nuclear observables [38-44]. Notice further that a good theoretical understanding of this process is expected to provide useful constraints for multi-channel PWA of γ- and π-induced reactions off the nucleon, which is the main task of the project A.2.
In our exploratory study, we have restricted ourselves to calculating the leading-order (LO), i.e., Q, and subleading-order (NLO), i.e., Q^2, contributions to the scattering amplitude, which emerge from tree-level diagrams. We have used both the HB and Cov formulations of baryon chiral EFT with and without explicit Δ degrees of freedom. Using the large-N_c value for the πNΔ coupling constant, the results at LO come as parameter-free predictions. At NLO, several LECs have to be determined. For the LECs c_i from L^(2)_πN, we have adopted the values extracted from two different PWA of elastic πN scattering. Thus, our Δ-less results at NLO are parameter free, too. In the HB Δ-full approach, one is left with three unknown LECs, namely b4 and b5 from L^(2)_πNΔ and g1 from L^(1)_πΔ. In the covariant framework, one, in addition, encounters the contributions from the LECs b3 and b6. These LECs have been determined from the available total cross section data in the considered reaction channels. With all LECs determined as described above, we have looked at various differential observables. As a representative example, we show in Fig. 8 our predictions for the angular correlation function W, see Ref. [33] for a precise definition. Here θ1,2 denote the angles between the momentum of the incoming and the two outgoing pions, while φ2 is the azimuthal angle of one of the outgoing pions, see Ref. [33] for more details. Generally, the explicit treatment of the delta isobar is found to result in a better description of the data, most notably of the π−p → π0π0n and π−p → π+π−n total cross sections at higher energies and of most of the differential cross sections. Still, certain single- and double-differential cross sections could not be properly described at this order in the chiral expansion. This calls for extending these calculations to the next-higher order. Work along this line is in progress.
Complex-mass renormalization scheme at the two-loop level
ChPT can be straightforwardly extended to include meson and baryon resonances as dynamical degrees of freedom for soft processes, i.e. for reactions with external pion momenta and three-momenta of the nucleons of the order of M_π, see e.g. the already discussed applications of Δ-full formulations of ChPT. On the other hand, formulating a consistent EFT at higher energies to study e.g. the properties of (axial-) vector mesons is a challenging task, see [1,49,50] and references therein for the first steps along this line using the complex-mass renormalization scheme [31]. It is, however, not a priori clear that this scheme is applicable beyond the one-loop level, i.e. that the renormalized expressions fulfill the power counting.
To clarify this issue, we have studied in Ref. [51] the two-point function of a heavy scalar Ψ-meson, which can decay into three light scalar fields π, at the two-loop level, see diagrams (a)-(c) in Fig. 9. The restriction to a simple model has allowed us to avoid complications due to the spin and chiral structure of the EFT which are not relevant for the discussion of the applicability of the complex-mass scheme. By explicitly calculating the diagrams (a)-(c) in Fig. 9, with the one-loop counterterm being determined from the πΨ scattering graphs (d)-(f), and by employing the complex-mass scheme, we were able to explicitly show the absence of power-counting-breaking terms in the final renormalized expressions for the self-energy. Our work opens the way for a systematic analysis of e.g. the ω-meson in chiral EFT.
Baryon chiral perturbation theory beyond the low-energy region
In Ref. [52], we have formulated a new approach to baryon ChPT which is applicable beyond the low-energy region at small scattering angles, and applied it to the reaction πN → πN. Specifically, instead of considering the kinematical region of t = O(Q²) and ν ≡ (s − u)/(4m_N) = O(Q), with s, t and u being the Mandelstam variables, as is usually the case in ChPT, we have demonstrated that the amplitude can be systematically calculated in the region of t = O(Q²) and ν² − μ² = O(Q) even for ν = O(1). In this kinematical region, despite the higher energies, the quark structure of hadrons is still not resolved.
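For orientation, the two kinematical regimes can be summarized side by side (a plain restatement of the definitions above; μ denotes the sliding subtraction scale introduced in the following paragraphs):

\nu \equiv \frac{s-u}{4 m_N}, \qquad
\text{standard ChPT:}\; t = \mathcal{O}(Q^2),\ \nu = \mathcal{O}(Q), \qquad
\text{new regime:}\; t = \mathcal{O}(Q^2),\ \nu^2 - \mu^2 = \mathcal{O}(Q),\ \nu = \mathcal{O}(1).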
To achieve this goal, we have re-arranged the effective chiral Lagrangian according to a new power counting and exploited the freedom in the choice of renormalization conditions for loop diagrams. The re-arranged effective Lagrangian at any given order in the new counting contains a finite number of terms, but involves some vertices of higher orders in the standard counting. Provided all the relevant resonances are treated as explicit degrees of freedom, the re-arranged effective Lagrangian generates a convergent Taylor series for the analytic part of the amplitude. Notice that the LECs of the re-arranged Lagrangian differ from the original ones by (known) μ-dependent shifts.
To ensure that loop diagrams obey the new power counting, we have employed a generalization of the EOMS scheme by shifting the renormalization point into the physical region above the threshold. More precisely, the loop integrals are subtracted at the nucleon momentum p² = m²_N + 2m_N μ. Since this procedure results in subtracting also the imaginary parts of the integrals, the renormalized LECs become complex in our scheme for μ > 0, as visualized in Fig. 10.

Figure 11. Electromagnetic form factors of the deuteron at LO in the renormalizable chiral EFT approach of Ref. [54] as a function of the momentum transfer q, in comparison with experimental data from Refs. [60,61]. Solid and long-dashed violet lines (short- and medium-dashed pink lines) show the predictions in the chiral (pionless) EFT with and without using phenomenological form factors of the nucleon, respectively.
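Schematically (a minimal sketch of the prescription just described, not the paper's explicit expressions), the generalized EOMS subtraction acts on a one-loop integral I(p²) as

I_R(p^2) \;=\; I(p^2) \;-\; I(p^2)\big|_{p^2 = m_N^2 + 2 m_N \mu},

so that for μ > 0 the subtraction point lies in the physical region, and the subtracted constant, together with the renormalized LECs that absorb it, acquires an imaginary part.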
As an application, we have used the resulting framework to analyze πN scattering at low energy (to go beyond the threshold region, one would have to explicitly take into account the relevant contributions of the nucleon resonances; this task is postponed for future work). With the relevant LECs being fitted to the πN PWA, all phase shifts in the threshold region were shown to be well described within the new formulation, with the results being similar to those discussed in section 4. Our new approach is not limited to elastic πN scattering and is also applicable to pion photo- and electro-production reactions, Compton scattering and processes involving several pions.
6 Applications in the two-nucleon sector
Deuteron electromagnetic form factors in a renormalizable formulation of chiral EFT
In Ref. [53], we have analyzed the electromagnetic form factors G_C, G_M and G_Q of the deuteron at LO within a renormalizable formulation of chiral nuclear EFT proposed in Ref. [54], see Refs. [55-58] and references therein for related earlier studies. This approach is based on a manifestly Lorentz invariant form of the effective Lagrangian without performing a non-relativistic expansion for the nucleon propagators. The NN amplitude can then be calculated by solving the corresponding three-dimensional integral equation which satisfies relativistic elastic unitarity [59]. The resulting scattering amplitude at LO in chiral EFT is renormalizable, which allows one to obtain regulator-independent results for observables. At LO, the form factors of the deuteron only receive contributions from single-nucleon currents, and the results do not involve any free parameters. In Fig. 11, we show by the violet long-dashed lines the resulting LO parameter-free, cutoff-independent predictions for G_C, G_M and G_Q. Notice that the deviations from the experimental data at momentum transfers of q ∼ 200 MeV and higher can be largely traced back to the well-known slow convergence of the chiral expansion for the nucleon form factors [62,63]. Indeed, replacing the LO single-nucleon current by the dispersion-theoretical form factor parametrization of Ref. [64], as shown by the solid lines in Fig. 11, a very good description of the data is observed even at rather high values of q. The resulting predictions for the deuteron magnetic and quadrupole moments, μ_LO = 0.826 (e/(2m)) and Q_LO = 0.271 fm², are in good agreement with the experimental values of μ_exp = 0.85741 (e/(2m)) and Q_exp = 0.2859 fm².
The renormalizable approach also provides clear evidence of the importance of the one-pion exchange NN potential, obtained by comparing the LO results in the π-less and π-full versions of the EFT, as visualized in Fig. 11. Notice that the quadrupole form factor of the deuteron vanishes at LO in the π-less approach.
¹S₀ NN scattering in the modified Weinberg approach
While the LO calculations in the novel renormalizable approach to NN scattering of Ref. [54] look promising, the inclusion of higher-order corrections requires much more extensive calculations than in the standard, non-relativistic framework. In Ref. [65], we made a first step along this line by analyzing the subleading contact interaction in the ¹S₀ partial wave. Given that the singular tensor part of the one-pion exchange potential does not contribute in spin-singlet channels, it is possible to include the NLO contact interaction both perturbatively and without relying on perturbation theory. In the latter case, the scattering amplitude shows a residual dependence on the subtraction point μ. We have argued that the unnaturally large value of the scattering length in the ¹S₀ channel requires the subtraction scale μ corresponding to the LO contact interaction to be chosen of the order of the hard scale in the problem.
We have also addressed the perturbativeness of the subleading contact interaction within our scheme and found it advantageous to treat it nonperturbatively at energies above E_lab ∼ 50 MeV. Finally, we have demonstrated that the predictions for the low-energy theorems for the coefficients in the effective range expansion are strongly improved when going from LO to NLO.
New fixed points of the renormalization group for nonrelativistic two-body systems
The renormalization group (RG) is known to be a powerful tool for studying the scale dependence of systems in various areas of physics. For a system of two nonrelativistic particles interacting via short-range forces, the nontrivial fixed point describing a system in the "unitary limit", analyzed in Ref. [66], has attracted much interest in the context of NN scattering in the ¹S₀ and ³S₁ channels. While the existence of other nontrivial fixed points of the RG equation for two-body scattering has been suggested in the literature, no explicit results were known. In Ref. [67], we have presented a general method for constructing an infinite number of possible fixed points by utilizing a separable ansatz for the potentials that arise in the EFT description of such systems. Our results indicate a much richer structure in the RG flows of simple short-range potentials than previously recognized.
Figure 2. The gyromagnetic ratio g_ρ of the ρ-meson as a function of the pion mass.
Figure 3. Reduced χ² for fits up to various values of the maximal energy T_π^max in the Δ-less (left panel) and Δ-full (right panel) formulations of ChPT. The blue/red/green bars denote the results for the HB-NN/HB-πN/Cov counting. The theoretical uncertainty is estimated using the approach of Ref. [17] and assuming Λ_b ∼ 600 MeV.
Figure 4. The dependence of the order-Q² πN LECs c_i, given in units of GeV⁻¹, on the maximal incoming pion energy T_π used in the fits at order Q⁴ in the Δ-less approach (upper panel) and at order Q⁴ + δ₁ in the Δ-full approach (lower panel). The blue/red/green points and lines denote the results of the HB-NN/HB-πN/Cov formulations.
Figure 5. Predictions for selected S- and P-wave phase shifts up to T_π = 100 MeV in the covariant Δ-full formulation of chiral EFT. Orange dotted, pink dashed and red solid bands refer to Q² + δ₁, Q³ + δ₁ and Q⁴ + δ₁ results, respectively. The truncation errors are estimated using the approach of Ref. [17] and assuming Λ_b ∼ 600 MeV. Blue points show the results of the RS equation analysis of Ref. [27].
Figure 6. Phase shifts obtained from the Δ-full covariant ChPT at leading one-loop order. Dots with error bars stand for the RS phase shifts [27], while circles without error bars represent the phase shifts from the George Washington partial wave analysis [32]. The solid (red) line represents the result of Fit III of Ref. [30]. The narrow red error bands correspond to the uncertainties propagated from the errors of the LECs, while the wide dashed error bands show truncation uncertainties at order Q³ estimated using the approach of Ref. [17].
Figure 7. Predictions for the total cross section up to T_π ≃ 400 MeV in the covariant Δ-full (left panel) and Δ-less (right panel) versions of chiral EFT. Dashed and solid lines refer to LO and NLO results, respectively. The maximal energy used in the fit at NLO is marked by the vertical dotted line. The bands at NLO reflect the uncertainty from using different partial wave analyses in the determination of the LECs c_i, as explained in the text. Experimental data are from Refs. [45-47].
Figure 8. NLO predictions for the angular correlation functions in the π⁻p → π⁺π⁻n channel at fixed values of θ₂ and θ₁ = 91° (upper panel), θ₁ = 76° (middle panel) and θ₁ = 71° (lower panel) for √s = 1301 MeV in the Δ-full (red lines) and Δ-less (cyan lines) covariant chiral EFT approaches. The bands at NLO reflect the uncertainty from using different partial wave analyses in the determination of the LECs c_i, as explained in the text. Experimental data are from Ref. [48].
Figure 9. Diagrams (a)-(c): the two-point function at the two-loop order. Solid and dashed lines refer to the heavy and light particles Ψ and π, respectively. The vertices in graphs (b) and (c) denote one-loop- and two-loop-order counterterms. Diagrams (d)-(f): one-loop graphs contributing to Ψπ scattering.
Figure 10. The renormalized LECs c_i in the generalized EOMS scheme as functions of the sliding scale μ.
Selective mode of action of plumbagin through BRCA1 deficient breast cancer stem cells
Background Studies over the past decade and a half have identified cancer stem cells (CSCs) to be responsible for tumorigenesis, invasion, sustenance of metastatic disease, radio- and chemo-resistance and tumor relapse. Recent reports have described the plasticity of breast CSCs (BCSCs) to shift between the epithelial and mesenchymal phenotypes via Epithelial-Mesenchymal Transition (EMT) and Mesenchymal-Epithelial Transition (MET) states as the reason for their invasive capabilities. Additionally, BRCA1 has been found to be a mammary stem cell fate determinant. However, it is not clear what would be the best marker that can be used for identifying CSCs in BRCA1 mutated cancers. Also, anticancer agents that can reduce the CSC population in a BRCA1 defective condition have not been addressed so far. Methods Putative BCSCs were identified based on Hoechst exclusion, CD44+/24−/low expression and Aldehyde Dehydrogenase 1 (ALDH1) positivity using flow cytometry. The 'stemness' of the isolated ALDH1+ cells was analysed by immunofluorescence, western blotting for stem cell and EMT markers as well as in vitro mammosphere assays. Induction of Reactive Oxygen Species (ROS) by Plumbagin (PB) in BCSCs was assayed by Dichloro-dihydro-fluorescein diacetate (DCF-DA) staining. Ovarian cancer xenografts treated with PB were subjected to immunohistochemical analysis to study the ability of PB to target CSCs. Results We have confirmed that ALDH1 positivity is the best marker for the identification of BCSCs in BRCA1-defective breast cancer cell lines when compared to the CD marker profile and Side Population (SP) analysis. BRCA1 status was observed to be a determinant of the abundance of epithelial-like (ALDH1+) or mesenchymal-like (CD44+/24−/low) BCSCs, and the reconstitution of a full length, wild type BRCA1 in HCC1937 breast cancer cells possessing a mutated BRCA1 transforms them from 'stem-like' to more 'mesenchymal'. For the first time we have shown that Plumbagin (PB), a naturally occurring naphthoquinone which is predominantly a ROS inducer, could reduce BCSCs specifically in BRCA1-defective, basal-like cancer cells. Conclusions The best marker for identifying BCSCs in a BRCA1 defective condition could be ALDH1, and BRCA1 mutated BCSCs would be mostly 'stem like' rather than 'mesenchymal'. Also, ROS inducers like PB could reduce BCSCs in BRCA1 defective cancers. Electronic supplementary material The online version of this article (doi:10.1186/s12885-016-2372-4) contains supplementary material, which is available to authorized users.
Background
In 1858, Rudolf Virchow proposed the embryonal rest theory of cancer [1]. This was the first ever reference to the possibility that a stem cell that stayed dormant during embryonic development and for years later could eventually become a cancer initiator. Julius Cohnheim corroborated this in striated muscle sarcoma of the kidneys [2]. After 'resting' for almost a century and a half since its first mention, the embryonal rest theory was revived as the CSC hypothesis, and then began the hunt for biomarkers to identify CSCs. Once the existence of a therapy resistant population within a tumor, which could possibly be the origin of tumorigenesis as well as the reason for relapse, was scientifically shown in leukemia [3], the Cancer Stem Cell hypothesis rapidly gained support from various quarters. The existence of CSCs in breast cancer was first brought to light when it was found that CD44+CD24−/lowLin− cells from breast cancer patients were able to generate tumors more efficiently than CD44+CD24+Lin− cells when implanted into the mammary fat pads of NOD/SCID mice [4]. Also, these cells exhibited the typical stem-like characteristics of self-renewal (surviving serial passage) and differentiation (reproducing the tumor heterogeneity of the primary tumor). Subsequently, a number of markers, including ALDH1, were discovered for BCSCs with special reference to the different molecular sub-types, with many of the markers being common to normal mammary stem cells as well as BCSCs [5-14]. However, most biomarkers to date have not stood the test of time. This, in addition to the difficulties of accurate xenograft studies, has prevented the CSC hypothesis from evolving into a theory.
Given the dearth of currently known distinct, distinguishing features for the identification of CSCs, there is a long way to go in the identification (if at all possible) of specific CSC markers for every cancer type. The best possible method would therefore be to restrict a study to a single subtype of cancer and try to find markers specific to CSCs in that cancer type.
BRCA1-defective tumors represent a class of early onset, high grade, poor prognosis, often triple negative breast cancers (TNBCs) with no specific or selective treatment regimen [15]. BRCA1 is a transcriptional regulator and plays a cardinal role in the Homologous Recombination Repair of damaged DNA [16-18]. Later, it was found to be important in embryonic development and was subsequently reported to be a determinant of the mammary stem cell fate [19,20]. However, the characteristics of BCSCs from BRCA1-defective cancers are largely unexplored. Earlier studies from our group have revealed the ability of PB (5-hydroxy-2-methyl-1,4-naphthoquinone) to selectively target BRCA1-defective ovarian cancer cells [21-24] as well as BRCA1-knockout breast cancer xenografts (unpublished data). Even though PB acts via multiple pathways to induce cytotoxicity in cancer cells, the major mode of action is the generation of ROS and subsequent DNA damage, which could be especially deleterious to BRCA1-defective cancer cells that lack the homologous recombination repair machinery [21-28]. Therefore, in this study we analyzed the ability of PB to target BCSCs from BRCA1-defective cancers.
Here we report that ALDH1 positivity could be one of the best markers for the identification of BCSCs from basal-like, BRCA1-defective breast cancer cell lines, based on the isolation of putative CSCs followed by their in vitro mammosphere culture. We have also shown for the first time that PB can target CSCs, wherein the BCSCs derived from cells possessing a reconstituted BRCA1 show an enrichment of ALDH1+ cells upon PB treatment. The BRCA1-defective HCC1937 shows a robust downregulation of ALDH1 positivity after treatment with sub-cytotoxic doses of PB. On the other hand, the standard drug carboplatin (CP) increased ALDH1+ cells in both cell lines, which might be one of the reasons for relapse after CP treatment.
Cell viability assay
An MTT assay to assess the effects of 48 h treatment with PB and CP on HCC1937 and HCC1937/wt BRCA1 was performed as described elsewhere [23]. 3-(4,5-Dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) was purchased from USB, Cleveland, OH, USA; PB and CP were from Sigma Aldrich, St. Louis, MO, USA. CP solutions were prepared in phosphate buffered saline (PBS) immediately before the experiment. PB was dissolved in dimethyl sulfoxide (DMSO), and the vehicle control (DMSO) did not show any effect on cell viability. All results are expressed as the percentage cell viability over control ± S.D. of quadruplicate determinations from three independent experiments.
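As an illustration of the viability computation described above, the following base R sketch reproduces the percentage-over-control calculation (the optical density values and plate layout are hypothetical, not taken from the study):

# Sketch: percentage cell viability over control from MTT optical densities.
# 'od' is a hypothetical matrix: rows = quadruplicate wells, columns = conditions;
# the first column is the vehicle (DMSO) control.
od <- matrix(c(0.82, 0.80, 0.85, 0.81,   # control (DMSO)
               0.44, 0.40, 0.47, 0.42),  # e.g. 5 uM PB (invented values)
             nrow = 4,
             dimnames = list(NULL, c("control", "PB_5uM")))
viability <- 100 * od[, "PB_5uM"] / mean(od[, "control"])  # % of control, per well
c(mean = mean(viability), sd = sd(viability))              # report as mean +/- S.D.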
Side population analysis
Cells were stained with 5 μg/ml of Hoechst 33342 (Sigma Aldrich, St. Louis, MO, USA) for 90 min in a 37°C water bath. Cells stained with 5 μg/ml Hoechst in the presence of 50 μM verapamil hydrochloride (a calcium channel blocker that prevents Hoechst efflux from cells; Sigma Aldrich, St. Louis, MO, USA) were used as the negative control. The cells were then stained with propidium iodide (PI) (2 μg/10⁶ cells) for dead cell exclusion and sorted using a BD FACSAria II flow cytometer (Becton Dickinson, Franklin Lakes, NJ, USA). Hoechst 33342 stained cells were analyzed using 350 nm excitation with blue (635 nm) and red (488 nm) emission. BD FACSDiva software was used for analysis.
Surface marker profiling
Cells were stained with mouse anti-human CD44-APC (C26) and CD24-FITC (ML5) antibodies (BD Biosciences, San Jose, CA, USA) for 30 min on ice. Cells stained with CD44-APC and CD24-FITC separately (single color) were taken as the controls. The 633 nm and 488 nm lasers were used for excitation, and the 530/30 nm BP and 660/20 BP detectors were used for CD24-FITC and CD44-APC, respectively. The CD44+/24−/low (putative stem cell) population was identified and analyzed using flow cytometry as mentioned above.
ALDEFLUOR assay
The ALDEFLUOR assay (ALDEFLUOR Assay Kit, Stem Cell Technologies, Durham, NC, USA) was used to identify and isolate the ALDH+ cells from the various cell lines. Briefly, cells in growing conditions or treated with PB or CP for 48 h were suspended in ALDEFLUOR assay buffer containing the ALDH substrate (BODIPY-aminoacetaldehyde or BAAA, 1 μmol/l per 1 × 10⁶ cells) and incubated for 40 min at 37°C in a water bath to assess the ALDH enzymatic activity. As a negative control, an aliquot of each sample of cells was treated with 50 mmol/l diethylaminobenzaldehyde (DEAB), a specific ALDH inhibitor. The ALDH1 positive (ALDH1+) and ALDH1 negative (ALDH1−) populations were sorted out by flow cytometry using the 488 nm laser and the 530/30 nm BP detector as mentioned above.
3D culture of spheroids
In order to recapitulate the mammosphere data in a system that involves the extracellular matrix components as would be seen in vivo, cells were seeded at 10,000 cells/ml in RPMI with insulin (5 μg/ml), hydrocortisone (1 μg/ml), EGF (10 ng/ml) and 5 % Growth Factor Reduced Matrigel (GFRM) into 8-well chamber slides (BD Falcon, San Jose, CA, USA) pre-coated with 50 μl GFRM (BD Biosciences, San Jose, CA, USA). 0.5 μM PB was added to the cells. Cells were viewed on Day 5, Day 7, Day 10 and Day 15 using the Olympus IX71 microscope and photographed.
Immunofluorescence analysis
Day 7 mammospheres were washed with Phosphate Buffered Saline (PBS) and fixed in 4 % paraformaldehyde. Following permeabilization with 0.25 % Triton X-100 and blocking with 1 % Bovine Serum Albumin (BSA), spheres were incubated overnight at 4°C in the primary antibody. After a PBS wash and incubation with a fluorochrome tagged secondary antibody for 2 h, spheres were mounted in Prolong Gold Antifade with DAPI (Life Technologies, Carlsbad, CA, USA) and viewed using the Olympus IX71 microscope. Immunofluorescence analysis was performed with rabbit anti-human Snail + Slug, rabbit anti-human β-catenin (E247), rabbit anti-human α-SMA (Smooth Muscle Actin), and goat anti-human Oct 4 purchased from Abcam (Cambridge, MA, USA), and goat anti-human Vimentin (C20), rabbit anti-human BRCA1 and mouse anti-human ABCG2 (6D17) from Santa Cruz Biotechnology (Santa Cruz, CA, USA).
DCF-DA staining
ROS production in mammospheres was assessed from the levels of the bright green colored 2′,7′-dichlorofluorescein (DCF) produced by the oxidation of the DCF-DA (2′,7′-dichlorodihydrofluorescein diacetate) dye by ROS. Mammospheres were treated with 1 μM PB for 10 min and then exposed to 500 nM DCF-DA (Sigma Aldrich, St. Louis, MO, USA; stock dissolved in dimethyl formamide and further diluted in PBS) for 20 min in the dark at 37°C. The green fluorescence was detected with the Leica DMI 6000B microscope at 495/529 nm.
Comet assay
The comet assay was performed by single-cell gel electrophoresis with modifications as described by Olive et al. (2006). Briefly, cells treated with PB for 4 h were lysed in neutral lysis buffer overnight at 37°C and seeded onto a low-melting agarose gel on a frosted slide. Electrophoresis was then carried out in Tris-Borate-EDTA buffer for 30 min at 50 V. The slides were stained with propidium iodide and observed under an Olympus IX71 fluorescence microscope for comets. The comets formed were scored using the CASPLab software.
Western blotting
Cell lysates were analyzed for expression of ABCG2, β-catenin and β-actin. Whole cell lysates were prepared in RIPA buffer supplemented with a protease inhibitor cocktail. Supernatant was collected by centrifugation at 13,000 rpm for 25 min at 4°C. Samples were then subjected to sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE), transferred to nitrocellulose membranes and immunoblotted. Protein bands were visualized by enhanced chemiluminescence detection (Amersham, Piscataway, NJ, USA) of the specifically bound antibody. Quantitation was done by densitometric analysis with the "Quantity One" software.
Xenograft experiments
Six-week old female Severe Combined Immunodeficient (SCID) mice with tumors left untreated for 28 days were randomized into two groups (ten animals per group). Group 1 was treated with 25 % polyethylene glycol (PEG) alone. Group 2 was treated with PB in 25 % PEG at a dose of 1 mg/kg/day intraperitoneally. Tumors were measured twice a week, and primary tumor volumes were calculated using the formula V = 1/2 × a × b², where 'a' is the longest tumor axis and 'b' is the shortest tumor axis.
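For reference, the tumor volume formula used above translates directly into R (a sketch; the caliper readings are invented for illustration):

# Tumor volume from caliper measurements: V = 1/2 * a * b^2,
# with 'a' the longest and 'b' the shortest tumor axis (same length unit).
tumor_volume <- function(a, b) 0.5 * a * b^2
tumor_volume(a = 12, b = 8)  # e.g. a 12 mm x 8 mm tumor gives 384 mm^3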
Immunohistochemical analysis (IHC)
After 3 weeks of treatment, all OVCAR-5 tumor-bearing mice were sacrificed by asphyxiation with CO₂; tumors were removed, measured, and prepared for IHC. Tumors were fixed in neutral buffered 10 % formalin at room temperature for 24 h. Subsequently, the samples were embedded in paraffin and sections were taken. Sections were deparaffinized and then immunostained for Oct 4 (Cell Signaling Technology, Beverly, MA), Vimentin (clone V9, Chemicon International, Temecula, CA) and N-cadherin (clone 13A9, Santa Cruz Biotechnology, Santa Cruz, CA) according to the manufacturer's instructions (DAB 150, Millipore, Billerica, MA). The secondary anti-mouse antibody was applied before adding the chromogen substrate. Stable diaminobenzidine was used as the chromogen substrate, and the sections were counterstained with a hematoxylin solution. Photographs of the entire cross-section were digitized using an Olympus camera (DP70).
Statistical analysis
The paired, two-tailed Student's t-test was used to assess the significance of differences between experimental groups. FACS results are expressed as mean ± S.D. from at least three independent experiments. Statistical significance was defined as (*) p ≤ 0.05 and (**) p ≤ 0.005. Error bars are based on the calculated S.D. values. All experiments were repeated at least three times.
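The statistical treatment described above maps directly onto base R; a minimal sketch with invented example values could look as follows:

# Paired two-tailed Student's t-test between two experimental groups,
# e.g. ALDH1+ percentages from three independent experiments (hypothetical values).
control <- c(5.4, 5.0, 5.8)
treated <- c(2.1, 2.6, 1.9)
t.test(control, treated, paired = TRUE, alternative = "two.sided")
# Reported as mean +/- S.D.; significance at p <= 0.05 (*) and p <= 0.005 (**).
c(mean = mean(treated), sd = sd(treated))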
Results
Intrinsic tumor subtype and BRCA1 status can determine the percentage of BCSCs within cancer cells

A comparison of the variations in the percentages of SP, CD44+/24−/low and ALDH1+ cells among breast cancer cell lines of different origins with varied receptor status has not been reported. Hence, we studied the effects of varied receptor status and the possible effects of BRCA1 gene expression on these subcellular populations. The cells showing low staining at both 635 nm and 488 nm were identified as the Hoechst-excluding side population (SP) cells. Hoechst-excluding SP cells constituted less than 2 % of the total breast cancer cell population in four out of the five cell lines analyzed. The percentage of SP in HCC1937 (2.7 ± 0.42 %) was about 1.5 times higher than that in HCC1937/wt BRCA1 cells (1.85 ± 0.21 %) (p < 0.05) (Fig. 1a). The putative BCSCs were also identified in all five cell lines and isolated based on the CD44+/24−/low marker profile as described previously [6,14]. Studies have suggested that the percentage of CD44+/24−/low putative CSCs present would vary between cancer cell lines and that this percentage is determined by two main factors: the origin of the cell line and the receptor status [9]. Among the normal-like tumor forming cell lines, the BRCA1-mutated MDA-MB-436 harbored more CD44+/24−/low cells (95 ± 1.4 %) than MDA-MB-231 (90.67 ± 1.75 %) (Fig. 1b). Similarly, in basal-like tumor forming cell lines, the CD44+/24−/low cells were enriched in BRCA1-defective HCC1937 (52.7 ± 1.84 %) when compared to HCC1937/wt BRCA1 (30.6 ± 5.0 %), which possesses a wild type BRCA1. Thus, the CD44+/24−/low expression profile paralleled a BRCA1 defect, though the general predominance exhibited tumor sub-type specificity, with normal-like tumor forming cells (MDA-MB-436 and MDA-MB-231) possessing more CD44+/24−/low cells than the basal-like tumor inducing cell lines (HCC1937 and HCC1937/wt BRCA1).
A tumor sub-type independent increase in the numbers of ALDH1+ cells was observed in the BRCA1-mutated cell lines MDA-MB-436 (8.4 ± 0.2 %) and HCC1937 (6.9 ± 0.5 %) when compared to cells possessing a wild type BRCA1 (Fig. 1c). ALDH1+ cells from HCC1937 showed the maximum sphere forming capability, followed by MCF7. MDA-MB-231 and MDA-MB-436 formed comparable numbers of spheres after 7 days in culture, while HCC1937/wt BRCA1 cells showed the least capability for in vitro mammosphere formation (Fig. 1d). The ALDH1 negative (ALDH1−) cells from HCC1937 and MDA-MB-436 were unable to form spheres in culture, in contrast to ALDH1− cells isolated from the MCF7, MDA-MB-231 and HCC1937/wt BRCA1 cell lines, which formed a few spheres in non-adherent culture (Fig. 1e). Hence, ALDH1 as a BCSC marker showed a specific association with BRCA1-defective breast cancers, and BRCA1 has a cardinal effect on BCSC numbers and properties.
The inclusion of five cell lines with largely varied genotypes made it difficult to draw corroborating conclusions from confirmatory experiments. The two isogenic cell lines HCC1937 and HCC1937/wt BRCA1, which differ exclusively in the expression and functionality of BRCA1, showed significant differences in sphere forming capability (p < 0.005), a hallmark of CSCs. Hence, the rest of the study is centered on these two TNBC cell lines.
BRCA1 can influence the EMT traits and proliferative potential of BCSCs
To further assess how BRCA1 influences cancer stemness, we compared and contrasted the EMT and stem cell marker expression patterns in HCC1937 and HCC1937/wt BRCA1. HCC1937/wt BRCA1 cells showed the presence of a distinct CD44high/24− population (3.07 ± 1.12 %) within their CD44+/24−/low (30.6 ± 5.0 %) putative cancer stem cell population (Fig. 2a, Additional file 1: Figure S1). This population was not observed in HCC1937 cells (Fig. 2b). Studies have attributed the presence of such a population to a more mesenchymal phenotype that parallels EMT traits in cell lines [29,30]. A comparison of the expression of the EMT markers [Snail, Slug, Vimentin, α-SMA (mesenchymal marker)] and the stem cell markers (Oct 4 and β-catenin) indicated that mammospheres of HCC1937/wt BRCA1 exhibited a higher expression of the EMT markers, while HCC1937 showed a higher expression of the stem cell markers (Fig. 2c). HCC1937 mammospheres displayed a membrane localization of β-catenin, indicative of retention on the cell membrane possibly prior to nuclear localization following activation of Wnt signaling, while HCC1937/wt BRCA1 mammospheres showed slight membrane as well as diffuse cytosolic expression. The cytoplasmic expression of β-catenin is indicative of the absence of Wnt signaling and hence the induction of proteasomal degradation of the protein [31]. Additionally, β-catenin expression levels were also much higher in mammospheres of HCC1937 than of HCC1937/wt BRCA1, while the parental, adherent cells exhibited the opposite trend (Fig. 2c). These observations were corroborated by the microarray analysis of HCC1937 and HCC1937/wt BRCA1 mammospheres (Additional file 2: Table S1).
Plumbagin shows selective sensitivity for BRCA1 defective breast cancer cells
We next studied whether PB, which has been shown to be selective towards BRCA1-defective ovarian cancer cells [23], could also selectively target BRCA1-defective basal-like breast cancer cells. Immunofluorescence and western blotting were used to analyze the expression and intracellular localization of BRCA1 in the HCC1937 and HCC1937/wt BRCA1 cell lines. We confirmed the low, cytosolic expression of BRCA1 in HCC1937, while in HCC1937/wt BRCA1 the BRCA1 protein was expressed and localized specifically to the nucleus (Additional file 3: Figure S2). This was expected, as the truncated, non-functional BRCA1 expressed by HCC1937 has been observed in the nucleus as well as the cytoplasm [32]. A cell proliferation assay revealed that PB is highly selective towards the HCC1937 cells, with a 5 μM concentration inhibiting proliferation in 56 % of cells, while in HCC1937/wt BRCA1 only 25 % inhibition of proliferation was observed (Fig. 3a, Additional file 2: Table S2). CP exhibited moderate selectivity towards BRCA1-defective cells as well, with a 200 μM concentration causing 54 % growth inhibition in HCC1937/wt BRCA1 and 62 % inhibition in HCC1937 cells (Fig. 3b).
siRNA knockdown of BRCA1 showed that the HCC1937 cell line, which possesses a C-terminally mutated BRCA1, did not show any difference in susceptibility to PB as assessed by the MTT assay, while there was a prominent decrease in cell proliferation in the HCC1937/wt BRCA1 cells upon blocking BRCA1 (Additional file 4: Figure S3). Thus, the activity of PB is closely linked to the BRCT domain 2 of BRCA1, which is absent in the HCC1937 cell line.
Plumbagin can target BCSCs
To assess whether the ALDH+ 'putative CSCs' could be specifically targeted based on the presence or absence of a functional BRCA1 protein, we analyzed the effects of PB on these populations of cells. In vitro studies with regard to the BRCA1-defective scenario showed that the number of ALDH1+ cells as well as the mammosphere forming efficiency were significantly reduced in a dose-dependent manner after treatment with <IC50/2 (1.0 μM) and <IC50/4 (0.5 μM) concentrations of PB. HCC1937 cells showed a significant dose-dependent reduction in sphere forming efficiency upon PB treatment when compared to HCC1937/wt BRCA1 cells at 0.5 μM (p = 0.0001) and 1.0 μM (p = 0.01) concentrations (Fig. 3c and d). However, when the direct effect of PB on BCSC numbers was analyzed, HCC1937/wt BRCA1 cells showed a surprising enrichment of the ALDH1+ cellular compartment upon PB treatment, from 3.2 % in control cells to 8.1 % at 0.5 μM and 11.3 % at 1.0 μM PB (Fig. 3e).

Fig. 1 ALDH1+ cells, and not ALDH1− cells, from BRCA1-defective breast cancer cell lines can form spheres. (a-c) Percentages of SP (a), CD44+/24−/low (b) and ALDH1+ (c) cells among the various breast cancer cell lines employed in this study; by all three methods of identification, the CSC population was significantly higher in BRCA1-defective HCC1937 when compared to HCC1937/wt BRCA1. (d, e) A comparison of the in vitro sphere forming ability of ALDH1+ and ALDH1− cells from the various cell lines, indicating the inability of ALDH1− cells from BRCA1-defective cell lines to form spheres and thus supporting that the ALDH1+ population from these cell lines entirely represents the BCSCs. Scale = 100 μm.

Fig. 2c Marker profiling of mammospheres shows better expression of the EMT markers (Snail, Slug, Vimentin) and the mesenchymal marker (α-SMA) by HCC1937/wt BRCA1 cells; expression of the stem cell markers (nuclear Oct 4 and membrane-bound β-catenin) was more prominent in the HCC1937 cell line, as observed by immunofluorescence as well as western blotting. Scale = 50 μm. "Ah" and "Mm" in the western blots represent adherent cells and mammospheres, respectively.
Interestingly, the graph of the Aldefluor assay in Fig. 3e shows that the HCC1937/wt BRCA1 CSCs are enriched by PB treatment. This observed increase in ALDH1+ cells within the HCC1937/wt BRCA1 cell line after PB treatment could not be correlated directly to ABCG2 expression, as has been reported earlier [33], since HCC1937 expressed higher levels of ABCG2 than HCC1937/wt BRCA1 (Fig. 4b). Immunofluorescence analysis of PB-treated mammospheres showed prominent membrane localization and marginally increased expression of ABCG2 in the HCC1937/wt BRCA1 cell line. However, no such change was observed in the HCC1937-derived spheroids formed in the presence of PB (Fig. 4c). Hence, the increase in ALDH1+ cells in HCC1937/wt BRCA1 cells could be attributed to the more robust membrane localization, rather than increased expression, of the drug efflux pump ABCG2 in comparison to HCC1937.
PB was found to induce reactive oxygen species (ROS) selectively in HCC1937 mammospheres, while the ROS levels of HCC1937/wt BRCA1 spheres remained unaffected, as evidenced by DCF-DA staining (Fig. 4d), indicating that the oxidative stress created was effectively taken care of and hence may not be cytotoxic to HCC1937/wt BRCA1 mammospheres. We also find that DNA DSBs are more abundant in HCC1937 than in HCC1937/wt BRCA1 when treated with PB, as the DNA DSBs could not be repaired, probably because of the absence of functionally active BRCA1 (Additional file 5: Figure S4).
A comparison of PB with the standard drug CP was also performed. The concentrations of CP used were ≤ IC50/2 (100 μM) and ≤ IC50/4 (50 μM). When HCC1937 and HCC1937/wt BRCA1 cells were treated for 48 h and the Aldefluor assay performed, there was a dramatic increase in the number of ALDH1+ cells in HCC1937/wt BRCA1 cells, from 4.0 ± 0.9 % in the untreated control to 27.3 % in the 50 μM treatment and 28.5 % in the 100 μM CP treatment (Fig. 4e). An enrichment of the ALDH1+ cellular sub-population from 5.4 ± 0.4 % in the untreated condition to 11.0 % was observed in HCC1937 cells treated with 100 μM CP for 48 h (Fig. 4e).
Plumbagin can target CSCs in ovarian cancer bearing SCID mice
To assess if the CSC-targeting effects of PB could be extended to ovarian cancers as well, we performed immunohistochemical analysis on ovarian cancer xenografts to study the expression of different EMT/stem cell markers. Earlier studies from our group as well as other groups have found PB to be effective in targeting ovarian cancer cells [24]. However, plumbagin to a lesser extent targets the BRCA1-competent cells as well. To investigate this further in ovarian CSCs, OVCAR-5 xenografts in NOD/SCID mice were subjected to immunohistochemical analysis with antibodies for Oct 4, N-cadherin and Vimentin, and it was found that PB treatment reduced the expression of all three markers, with the most prominent being the reduction in N-cadherin (mesenchymal marker) expression (Fig. 5), followed by Vimentin and Oct 4 expression. Hence, PB can putatively target ovarian CSCs by predominantly affecting the N-cadherin expressing cells and, to a lesser extent, by targeting the mesenchymal cells that express Vimentin and the stem cells that express Oct 4. The studies on OVCAR5 xenografts provide preliminary evidence for the possibility of targeting CSCs with plumbagin.

Goodell et al., 1996, showed that the low Hoechst staining population could be identified as a special, drug-effluxing 'Side Population' (SP) that conferred drug resistance to cancer cells and was responsible for relapse [34]. Subsequently, side population cells from various cancers were isolated using the property of exclusion of the Hoechst 33342 dye, and all these isolated populations showed long-term proliferative potential and the ability to differentiate into the mesenchymal cell lineage [35,36]. The possible roles played by the drug efflux pumps in CSCs have also been elucidated, with the drug resistance protein ABCB1 recently being implicated in the functioning of renal cell carcinoma SP cells [37]. Our results suggest the existence of SP cells with 1-2.7 % predominance across the breast cancer cell lines. No direct correlation could be drawn between BRCA1 status and SP levels, as other factors governing the genotype of the cells may be important determinants. Hence, SP analysis may not be the method of choice for the identification of BRCA1-defective BCSCs across subtypes. However, a reduction in SP cell numbers was observed when BRCA1 was introduced into HCC1937.

Fig. 3 Cytotoxicity of PB is selective to BCSCs from the BRCA1-defective breast cancer cell line. Effects of (a) plumbagin and (b) carboplatin on the proliferation of HCC1937 and HCC1937/wt BRCA1 cells. (c) PB reduces the sphere forming efficiency in terms of size and (d) the number of spheres formed in a dose-dependent manner in HCC1937 cells when compared to HCC1937/wt BRCA1 cells. Scale = 100 μm. (e) A reduction in the ALDH1+ population after PB treatment was observed in HCC1937 cells, while an enrichment of ALDH1+ stem cells upon PB treatment was observed in HCC1937/wt BRCA1 cells by flow cytometry; the graph shows the variation in the percentages of ALDH1+ populations harbored by the breast cancer cell lines after treatment with ≤ IC50/2 (1 μM) and ≤ IC50/4 (0.5 μM) concentrations of PB.

Fig. 4 BRCA1-defect specific effects of PB are attributable to the production of ROS and weak membrane localization of ABCG2. (a) 15-day Matrigel culture of mammospheres derived from HCC1937 cells and HCC1937/wt BRCA1 cells shows the selective inhibition of sphere formation in HCC1937 in the presence of 0.5 μM PB. (b) ABCG2 expression in these cell lines. (c) ABCG2 expression in HCC1937 and HCC1937/wt BRCA1 derived mammospheres formed in the presence of varying concentrations of PB shows improved membrane localization of ABCG2 after PB treatment in HCC1937/wt BRCA1 mammospheres; cells were treated with PB on day 1, mammospheres were supplemented with growth media without PB on day 4, and mammospheres were analyzed on day 7. (d) Variation in the production of ROS in PB-treated spheres of the HCC1937 and HCC1937/wt BRCA1 cell lines, observable as a bright green fluorescence by DCF-DA staining. (e) Aldefluor assay after 48 h treatment of HCC1937 and HCC1937/wt BRCA1 cells with sub-cytotoxic concentrations of CP shows a dose-dependent enrichment of ALDH1+ cells in both cell lines. Scale = 100 μm.
Discussion
It was found that very small numbers of cells of the CD44+/24− phenotype were capable of initiating mouse mammary tumors [6,14]. The CD44+/24−/low cell population also showed the properties of self renewal and multi-lineage differentiation, and thus the CD44+/24− phenotype was tagged as the putative breast cancer stem cell phenotype. Studies have suggested that the percentage of CD44+/24−/low putative CSCs present would vary between cancer cell lines and that this percentage is determined by two main factors: the origin of the cell line and the receptor status [9]. When isolated from cancer tissues, the stem cell abundance is dependent on many more biological markers, such as tumor size, histological grade, lymph node lesions, receptor status and molecular subtype of the tumor [33,38,39]. A comparison of the sub-cellular percentages of these populations in five different cell lines brought to light the fact that the molecular subtype of the tumor induced by the cell lines and the BRCA1 status were important in determining the CD44+/24−/low and the ALDH1+ cell population numbers, respectively. This is in addition to the already known determinant, namely the cell type of origin [9]. It was observed that the BRCA1-mutated cell lines were enriched for ALDH1+ cells among both normal-like tumor forming (MDA-MB-436) and basal-like tumor forming (HCC1937) cell lines. ALDH1 positivity was thus associated with a BRCA1 defect, as reported earlier [20], and we additionally find this association to be independent of tumor sub-type, unlike the CD44/24 profile. A similar link was also observed between the CD44+/24−/low profile and BRCA1 expression status in basal-like breast cancer cells, as HCC1937 harbored more CD44+/24−/low cells than the HCC1937/wt BRCA1 cell line. The Estrogen Receptor and Progesterone Receptor positive MCF7 cell line exhibited the lowest numbers of SP, CD44+/24−/low and ALDH1+ cells. This could be due to its low invasive capabilities in comparison to the TNBC cell lines [40]. The aggressive TNBCs of basal origin showed a high expression of CD44+/24−/low cells, with MDA-MB-231 and MDA-MB-436, which form normal-like breast cancers, exhibiting higher levels of these putative BCSCs. The existence of 80-90 % CD44+/24−/low cells in MDA-MB-231 and MDA-MB-436 adds credence to earlier reports that this population is not exclusively constituted of BCSCs [41]. Further, the number of CD44+/24−/low cells was significantly lower in HCC1937 when compared to MDA-MB-436 (p = 0.0001), though both are BRCA1-mutated. Thus, the CD44+/24−/low population abundance could be independent of BRCA1 expression and dependent on the tumor sub-type. The use of the three established methods for the isolation of CSCs, followed by in vitro self renewal analysis by mammosphere formation studies, has thus brought out the ALDH1+ phenotype as the best candidate putative stem cell marker for BRCA1-defective breast cancer cells. This is because the ALDH1+ cells from BRCA1-defective cell lines formed large mammospheres in comparison to ALDH1+ cells from BRCA1-competent breast cancer cells. Also, the ALDH1− cells from HCC1937 and MDA-MB-436 cells failed to form spheroids in culture. The ALDH1− cells from MCF7 and HCC1937/wt BRCA1, which possess a wild type BRCA1 gene, formed spheroids, though fewer and smaller in size than those formed by the corresponding ALDH1+ cells.
A recent report indicates ALDH1 positivity to be a better marker for relapse than the CD44+/24−/low profile [42]. Also, the CD44+/24−/low profile has been found to represent the slow cycling, mesenchymal BCSCs, while the ALDH1+ cells are the rapidly proliferating epithelial BCSCs [43]. This being the case, we decided to analyze the HCC1937/wt BRCA1 and HCC1937 cells for their mesenchymal and proliferative characteristics. Immunofluorescence-based profiling of the spheroids generated from the two cell lines indeed showed a more robust expression of the EMT markers Snail, Slug and Vimentin and the mesenchymal marker α-SMA in the HCC1937/wt BRCA1 mammospheres, and a higher expression of the stem cell markers Oct 4 and β-catenin in the HCC1937 cell line. Concordantly, we also observed for the first time the existence of a CD44high/24− population within the HCC1937/wt BRCA1 cell line, which is absent in HCC1937 cells and possibly responsible for the prominent mesenchymal nature of HCC1937/wt BRCA1 cells, which showed the least in vitro sphere forming ability, while HCC1937, harboring proliferating stem cells, formed the maximum number of spheres in culture, as shown in Fig. 1d.
Earlier reports from our group have established the ability of PB, a naturally occurring naphthoquinone, to induce cytotoxicity specifically and selectively in BRCA1-defective ovarian cancers [21-24,44]. The C-terminal end of the BRCA1 protein has been shown to be involved in phospho-protein binding, thus playing a direct, central role in the functioning of the protein [45,46]. HCC1937 has a 5382insC mutation that leads to the synthesis of a C-terminally truncated BRCA1 protein [47,48]. The silencing of BRCA1 and the associated difference in the susceptibility of HCC1937/wt BRCA1 to PB show that a mutated BRCT domain 2 at the C-terminus of BRCA1 plays an important role in mediating the specific activity of PB in BRCA1-defective cancers. Hence, we suggest that the activity of PB is closely linked to the presence of a functional BRCT domain 2 of BRCA1.
A number of varied targets of plumbagin have been identified over the years [26] that affect cell viability through the induction of G2-M arrest, autophagy and apoptosis, as well as the inactivation of NF-κB and Bcl2 [27,49]. However, the possible effects of PB on CSCs were still unexplored and have been analyzed in this study. An agent with a putative ability to target CSCs exhibits a three-pronged effect in vitro, inducing a reduction in the number and size of mammospheres as well as directly reducing or abrogating the CSC marker expressing population. Such a decrease in ALDH1+ cells associated with Wnt signaling was observed by Y. Li et al., 2010 with sulforaphane treatment in breast cancer cells [50]. In vitro experiments on the effects of PB on BCSCs gave promising activity, whereas the standard drug carboplatin caused an increase in ALDH1+ stem cell numbers post treatment. This observed increase after treatment is concordant with carboplatin being an effective anti-cancer drug capable of killing and clearing the bulk tumor cells rapidly. This consequently triggers the otherwise quiescent stem cells (ALDH1+ cells) to enter the cell cycle, leading to a dramatic increase in the number of ALDH1+ 'stem cells' and possibly accounting for the incidences of relapse after chemotherapy with carboplatin. The enrichment of ALDH1+ BCSCs in PB-treated HCC1937/wt BRCA1 could be explained as a direct effect of the membrane localization, rather than increased expression, of the drug efflux pump ABCG2 (Fig. 4e). However, in HCC1937, PB shows the promising ability to directly reduce the number of ALDH1+ BCSCs, possibly by inhibiting the membrane localization of ABCG2 as well as by triggering ROS generation [51,52]. The ROS signaling induced by PB could proceed through the activation of the PI5K-1B protein, which then induces ROS production and/or triggers loss of mitochondrial membrane potential and apoptosis [23,53]. This further corroborates the specificity of PB towards BRCA1-defective cells and also provides a reason for the BCSC enrichment observed in PB-treated HCC1937/wt BRCA1 spheres, in which the ROS induction by PB is seen to be effectively nullified and a hypoxic condition is maintained. Hypoxia is a well-documented inducer of EMT as well as CSC phenotypes [54-56]. We have shown earlier that HMOX1 is about 6.6-fold higher in the BRCA1-deficient than in the BRCA1-proficient condition when treated with plumbagin [44]. The sustained high levels of ROS and the correspondingly high levels of HMOX1, compared to the moderate-to-low levels of ROS and HMOX1 observed in the BRCA1 wild type condition, could induce apoptosis in the BRCA1-deficient condition instead of survival. Thus, PB, which was reported earlier to have a specific cytotoxicity in BRCA1-defective ovarian cancers, is found in this study to possess the ability to target BRCA1-defective BCSCs as well (Fig. 6). Recent studies have found the involvement of the Wnt/β-catenin-ABCG2 signaling pathway in the chemoresistance and tumor-initiating capacity of ovarian cancer cells [57]. However, none of these have studied the differences in the sub-cellular localization of ABCG2.
Immunohistochemical analysis of ovarian cancer xenografts has provided preliminary in vivo evidence for the ability of PB to target CSCs. A reduction in Vimentin (EMT marker), N-cadherin (mesenchymal marker) as well as Oct 4 (stem cell marker) expressing cells after PB treatment shows, for the first time, the ability of the naphthoquinone to target CSCs in vivo. Studies from our group have earlier shown that plumbagin can target BRCA1-defective breast (unpublished data) as well as ovarian cancers [21] more effectively than it can target BRCA1-competent cells from both cancers. The studies on OVCAR5 xenografts provide preliminary evidence for the possibility of targeting BCSCs with plumbagin. This could be accomplished with greater selectivity towards BRCA1-defective BCSCs if the in vitro effects of PB are reflected in vivo too. This paves the way for further xenograft experiments with BRCA1-defective breast cancer cell lines, which could bring out the specific and pronounced effects of PB in BRCA1-defective BCSCs, as is expected from the in vitro and in vivo data generated in this study.
This study shows a possible role for BRCA1 in determining the EMT and stem cell characteristics of BCSCs and also links BRCA1 and the marker expression profile of BCSCs to the existence of a CD44high/24− sub-population. In support of this, suppression of EMT by BRCA1, and expression of Slug and Snail in response to repression of BRCA1 expression, have been reported earlier [58,59].
It has been well established that low levels of ROS help CSCs to survive and lead to adaptive changes for tumor progression. Shi et al., 2012 demonstrated that the ROS generated by xenobiotics tip the redox balance and kill cancer cells while not affecting normal cells [60]. In our study, the ability of PB to induce DSBs (Additional file 5: Figure S4), coupled with its ability to generate ROS in HCC1937 mammospheres (Fig. 4d), leads to a 'double effect': the increased ROS in the mammospheres stresses the CSCs, which are known to thrive in low-ROS conditions, and the additional burden of DNA DSBs that cannot be repaired due to the lack of functional BRCA1 causes the CSCs to die. This is reflected in the reduction of the ALDH1+ BCSC population after PB treatment. On the other hand, PB is unable to induce ROS and DNA DSBs in HCC1937/wt BRCA1 BCSCs, which continue to thrive in the lower ROS environment present in the PB-treated mammospheres. This leads to the increase in ALDH1+ BCSC numbers after PB treatment. This is the maiden study addressing the ability of PB to target CSCs, with selectivity for BCSCs from BRCA1-defective breast cancer cell lines. At the molecular level, the ROS induced by PB in the BCSCs may be acting partially through the Wnt/β-catenin-ABCG2 pathway, making BCSCs more susceptible to PB by preventing its efflux [57,61]. This aspect of the action of PB has to be investigated further.
Thus, future studies aimed at targeted delivery of PB into BRCA1-defective breast cancers could open up new avenues for the complete abrogation of the cancer, including CSCs, aiding relapse-free survival after chemotherapy.
Conclusion
In support of the recalcitrant nature of BRCA1-related cancers to therapy, we show the predominant expression of stem cell markers in mammospheres derived from BRCA1-defective cancer cells, in contrast to the EMT markers expressed by BCSCs in a BRCA1-competent condition. The majority of chemotherapeutics abrogate rapidly proliferating bulk tumor cells, while quiescent, therapy resistant CSCs are subsequently enriched. We demonstrate for the first time that a naphthoquinone, PB, selectively reduces the ALDH1+ population in basal-like, BRCA1-defective breast cancer cells while enriching the ALDH1+ population in cancer cells harboring a full length, functional BRCA1, the reason for which has to be analyzed in the future. Carboplatin, on the other hand, caused an enrichment of ALDH1+ cells irrespective of BRCA1 status. PB also induces ROS production, potentially culminating in cell death, exclusively in the mammosphere-derived cells of BRCA1-defective HCC1937. This study gives insights towards an effective treatment regimen that can inhibit CSC-induced resistance to therapy and subsequent disease relapse.

Fig. 6 Model of BRCA1-defective BCSCs and their response to anticancer agents. BRCA1-defective BCSCs are mostly positive for stem cell markers (nuclear Oct 4 and membrane-bound β-catenin), while BRCA1 wild type BCSCs show expression of EMT markers (Snail, Slug, Vimentin) and the mesenchymal marker (α-SMA). PB, a ROS inducer, can cause DNA DSBs which cannot be effectively repaired in BRCA1-defective cells, leading to apoptosis (unpublished data). Also, the very high levels of HMOX1 expression in the BRCA1-deficient condition, as reported earlier, may cause apoptosis induction. In BRCA1 wild type BCSCs, ROS-mediated DNA damage will be repaired, and the low level of hypoxia created by PB can lead to low HMOX1 induction and further stem cell enrichment. In addition, drug efflux will not occur in BRCA1-defective mammospheres, as ABCG2 membrane translocation is hampered in the presence of PB, whereas drug efflux will be high in BRCA1 wild type BCSCs treated with PB, owing to the presence of active ABCG2. The same mechanism may be in action in both conditions after CP treatment.
OmicsON – Integration of omics data with molecular networks and statistical procedures
The huge amount of atomized biological data collected in various databases, and the need to describe their relations by theoretical methods, drives the development of data integration methods. This paper presents the analysis of omics data through the integration of biological knowledge with mathematical procedures, as implemented in the OmicsON R library. OmicsON is a tool for the integration of two sets of data: transcriptomics and metabolomics. In the workflow of the library, functional grouping and statistical analysis are applied. Subgroups among the transcriptomic and metabolomic sets are created based on the biological knowledge stored in the Reactome and STRING databases. This makes it possible to analyze such sets of data by multivariate statistical procedures like Canonical Correlation Analysis (CCA) or Partial Least Squares (PLS). The integration of metabolomic and transcriptomic data based on the methodology contained in OmicsON makes it easy to obtain information on the connections between data from two different sets. This information can significantly help in assessing the relationship between gene expression and metabolite concentrations, which in turn facilitates the biological interpretation of the analyzed process.
Introduction
Molecular biology technologies that produce large amounts of data have given rise to large-scale biological datasets. Analysis of such data imposes methodological challenges related to their complex structure and size. Driven by high-throughput omics technologies and the surge in computational power, this approach enables multi-scale and insightful overviews of cells, organisms, and populations, and it has had a huge impact on the discovery of next-generation diagnostics, biomarkers, and drugs in the precision medicine era [1].
The systemic exploration of complex interactions in biological systems, made possible by the development of new technologies and analytical methods, allows the creation of clinically useful tools [2]. Despite their promise, the translation of these technologies into clinically actionable tools has been slow [3] [4] [5]. This is due to the complexity of the issues, but also to problems arising from data standardization, data sharing, and the appropriate storage and exploration of omics data [6]. Properly organized and standardized data, together with newly developed methodologies and computational algorithms, lead to comprehensive integration of multi-omics and clinical data and thus enable insightful interpretation of biological processes [7] [8] [9] [10] [11] [12] [13]. In this paper, we present data analysis that integrates biological knowledge (from biological databases) with mathematical procedures (multidimensional statistical analysis), as developed in the OmicsON library. The procedure implemented in OmicsON is based on a previously developed and published algorithm [14].
Design and implementation
The OmicsON library was written in the R language [15] according to the Bioconductor guidelines. The individual analysis steps are described based on lipidomic and transcriptomic data [16]. The standardization, evaluation, and quality control of the data sets are beyond the functionality of the OmicsON library. It is assumed that data pre-processing is done before analysis in OmicsON (because of the different types of, e.g., microarray technologies and data formats).
The transcriptomic and lipidomic analysis based on the OmicsON library follows the workflow presented in the diagram (Fig 1). The workflow in OmicsON consists of functional pathway and network analysis followed by statistical analysis using Canonical Correlation Analysis and Partial Least Squares regression. Significant associations between lipidomic and transcriptomic data are calculated. In this way, the researcher receives a set of information for substantive assessment.
Data input
The input is a set of transcriptomic and lipidomic data that were previously prepared, i.e., normalized and quality controlled. Variable distributions should be multivariate normal. As input, the library expects two matrices stored in tab-delimited text files: a transcriptomic set and a lipidomic set (fatty acids). The transcriptomic set should contain gene symbols according to the HGNC nomenclature [17] in the columns and the levels of gene expression for the individual samples in the rows. The metabolomic dataset should contain metabolite identifiers according to the ChEBI nomenclature [18] in the columns and quantitative data, e.g., metabolite concentrations, in the rows.
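As a minimal sketch (not the actual OmicsON loading code, and with hypothetical file names), such input might be read in R as follows:

  # Load the two tab-delimited matrices; rows are samples, columns are
  # HGNC gene symbols / ChEBI metabolite identifiers.
  transcriptomics <- read.table("transcriptomics.txt", header = TRUE, sep = "\t",
                                row.names = 1, check.names = FALSE)
  lipidomics <- read.table("lipidomics.txt", header = TRUE, sep = "\t",
                           row.names = 1, check.names = FALSE)
  stopifnot(nrow(transcriptomics) == nrow(lipidomics))  # same samples in both sets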
Gene mapping and grouping data
Gene mapping is a step in which the data are enriched by searching for ontologically related molecules present in Reactome pathways [19]. Results from the Reactome step are further enriched with gene interactions based on knowledge taken from the STRING [20] database. Information from the Reactome and STRING databases makes it possible to reduce the dimensionality of large gene transcription datasets. The result is a broad list of potential interactions to be sifted later by statistical testing, with two goals: to minimize potential bias towards well-characterized biochemical pathways and, at the same time, to propose new putative links relevant for the specific study (tissue, conditions).
OmicsON allows the mapping of HGNC, Ensembl, and UniProt gene identifiers to find relationships in pathways deposited in the Reactome database (data sets in supplementary materials: S1 Table).
Enrichment of information on sets of genes associated with fatty acids can be performed for subsets of fatty acids (grouping data) [21]. Grouping data can be done before and/or after gene mapping. For example, the correlation analysis of many thousands of transcriptomic variables (microarrays) across a few (or even several dozen) samples against metabolomic data is difficult to carry out and, in many cases, impossible with known statistical techniques. In the case of classical canonical correlation analysis, the use of such a large matrix for correlation with a metabolic set is not feasible. In such cases, it is justified to divide the data into subgroups according to justified criteria.
The division into subsets of fatty acids and genes has not been automated in OmicsON. Users can make such a breakdown according to criteria that may differ depending on the analytical approach.
Statistical methods
Subgroups of the gene and metabolite sets are analyzed with the Canonical Correlation Analysis (CCA) [22] and Partial Least Squares (PLS) [23] multivariate procedures. These two mathematical methods can both be used on the same datasets (subgroups) or can be used separately depending on the dimensions of the datasets (PLS is free of dataset dimension limits).
Canonical Correlation Analysis (CCA). Canonical analysis is a generalization of multiple regression (the regression of many independent variables {X1,. . ., Xk} onto one dependent variable Y) to two sets of variables, i.e., a set of independent variables {X1,. . ., Xk} and a set of dependent variables {Y1,. . ., Ym}. CCA helps answer the following questions: • What is the influence of the group of independent variables on the group of effect (dependent) variables? In our case, the independent variables {X1,. . ., Xk} are gene expressions and the dependent variables {Y1,. . ., Ym} are lipidomic measurements.
• Which, if any, of the independent variables {X1,. . ., Xk} explains the largest possible share of the variability in the set of dependent variables {Y1,. . ., Ym}?
• Will the introduction of additional (new) independent or dependent variables into the analyzed sets affect (increase) the value of the total explained variance?
Canonical correlation makes it possible to assess whether variables from one data set can be used to predict another set of data, by finding a linear combination of variables from the first set that correlates as strongly as possible with the variables from the second set. This boils down to finding coefficient vectors (a1, a2, . . ., ak) and (b1, b2, . . ., bm) such that the correlation cor(a'X, b'Y) is as large as possible. In this way, new variables are created, the so-called hidden or canonical variables, i.e., u1 = a'X and v1 = b'Y, which explain a significant part of the variability between the analyzed sets.
In the next steps, further hidden variables are found, i.e., ui = ai'X and vi = bi'Y, with the largest correlations cor(ui, vi). The designated hidden variables, which are weighted sums of the original variables, serve as a measure of the correlation between the sets {X1,. . ., Xk} and {Y1,. . ., Ym}.
In canonical analysis, the crucial step is the selection of the canonical coefficients for the particular variables in such a way that the two weighted sums (a1X1 + a2X2 + . . . + akXk and b1Y1 + b2Y2 + . . . + bmYm) are maximally correlated. The structure of the variables and their specific contributions are reflected precisely by the canonical weights. The higher the (positive or negative) value of the weight, the larger the contribution of a given variable to the canonical variable. Canonical weights are often given for standardized variables (mean = 0, standard deviation = 1), which facilitates their evaluation and comparison. These weights can be interpreted like beta coefficients in multiple regression. Correlations between hidden variables are called canonical correlations. A low canonical correlation, or the lack of one, may indicate a wrong model or a lack of connections between the two analyzed sets of variables. Canonical correlations cannot be interpreted in the same way as Pearson's correlation; in canonical analysis, this is only an auxiliary coefficient showing the extent to which the pair of weighted sums have been linked (correlated).
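The maximization of cor(a'X, b'Y) can be illustrated with the cancor() function from base R; this is a toy example on synthetic data, not the OmicsON implementation:

  # Toy CCA with base R's cancor(); X plays the role of gene expressions,
  # Y of lipid concentrations.
  set.seed(1)
  n <- 60
  X <- matrix(rnorm(n * 4), n, 4)          # four "gene" variables
  Y <- cbind(X[, 1] + rnorm(n, sd = 0.5),  # correlated with the first X variable
             rnorm(n))                     # pure noise
  cc <- cancor(X, Y)
  cc$cor                                   # canonical correlations cor(ui, vi)
  u1 <- scale(X, cc$xcenter, FALSE) %*% cc$xcoef[, 1]  # u1 = a'X
  v1 <- scale(Y, cc$ycenter, FALSE) %*% cc$ycoef[, 1]  # v1 = b'Y
  cor(u1, v1)                              # reproduces cc$cor[1]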
In canonical analysis, correlations between the canonical variables and the variables in each group are also determined. These are the so-called canonical factor loadings. The greater the value of a factor loading, the larger the contribution of that variable to the canonical variable. The squared factor loading represents the proportion of the variance of a given variable explained by the canonical variable (an interpretation similar to the coefficient of determination). The average of the squared loadings over all variables indicates what percentage of the variance is, on average, explained by the given canonical variable; this is called the fraction of variance extracted by the canonical variate.
In canonical analysis, there is also a parameter called redundancy. It is calculated by squaring the canonical correlation and multiplying it by the average variance extracted for the set of independent variables {X1,. . ., Xk}. For example, if the redundancy of the first canonical variable of {X1,. . ., Xk} is 0.76, it means that this variable explains 76% of the variation in the set {Y1,. . ., Ym}.
Summing up, canonical analysis consists of: • finding hidden (canonical) variables that are not correlated with each other and that give information about the specific variability shared between the two data sets.
• calculating canonical weights describing the pure contribution of each variable to the canonical variable.
• calculating factor loadings determining the correlation of each variable with the canonical variable.
• calculating the redundancy, which tells how much of the average variance of one set is explained by a given canonical variable built from the variables of the second data set.
Classical canonical correlation has many limitations, including: • variable distributions should be multivariate normal.
• variables within each set should not be collinear; such redundancy makes the correlation matrix ill-conditioned and prevents a reliable presentation of the results.
• outlying points should be rejected, because canonical analysis is very sensitive to them.
• it is recommended to use at least 20 times as many observations as variables in the analysis.
Partial Least Squares regression (PLS). Partial Least Squares regression (PLS), like canonical correlation, is used to estimate the relationship between lipidomic and transcriptomic data. The practical difference between canonical analysis and Partial Least Squares regression lies mainly in the initial assumptions. PLS does not require the rejection of correlated dependent and independent variables, and the number of observations may be smaller than the number of predictors. Therefore, PLS is used in exploratory analysis to choose convenient predictors and to identify outliers.
The PLS method (often referred to as "latent mapping") combines features of principal component analysis and multiple regression. First, it extracts a set of latent factors that explain as much of the covariance as possible between the independent and dependent variables (a covariance matrix is created). Then, the values of the dependent variables are predicted using the decomposition of the independent variables. The PLS method is particularly useful when the predictors (the X variables) are highly correlated or when the number of variables is greater than the number of observations. PLS is the least restrictive extension of the linear multiple regression model, which is also the basis for many other multidimensional methods, and it can be used in situations where the use of traditional multidimensional methods is severely limited or ineffective.
The constraints that are not imposed by the PLS method are: (1) that factors reflecting the variables from the sets Y and X must be extracted from the matrices Y'Y and X'X, respectively, rather than from a matrix of mixed products involving variables from both sets, and (2) that the number of prognostic functions may never exceed the minimum of the numbers of variables in Y and X. In the case of PLS, prognostic functions are represented by factors determined from the matrix Y'XX'Y, and the number of such prognostic functions that can be determined usually exceeds the maximum of the numbers of Y and X variables. PLS can be used as a mining analysis tool to select prediction variables (it finds sets of independent variables) and to find outliers before applying classical linear regression.
The purpose of PLS (partial least squares regression) is to build a linear model of the form Y = XB + E, where: Y is the matrix of response variables with dimensions n (number of cases) by m (number of variables); X is the matrix of independent variables with dimensions n (number of cases) by k (number of variables); B is the matrix of regression coefficients with dimensions k by m; and E is the random component of the model, with the same dimensions as Y.
In the PLS procedure, factor values are calculated as linear combinations of the original variables (predictors) in such a way that there is no correlation between the factor-value variables used in the prognostic regression model. An important step in PLS analysis is to check the significance of the latent variables with cross-validation. For example, take a set of response data (dependent variables Y) and predictors (independent variables X), which may in part be significantly correlated with each other. For such data, the matrix of factor values T = XW (W is the matrix of weights) is determined, and then the linear regression model Y = TQ + E (Q is the matrix of regression coefficients, i.e., loadings) can be used. The weight matrix W reflects the covariance structure between the independent variables (predictors) and the dependent variables (responses) in that it maximizes the covariance between the dependent variables and the corresponding factor values. Next, the least-squares method determines the regression of Y on T to obtain the matrix Q of loadings (weights) for Y such that Y = TQ + E. Substituting T = XW then yields the equation Y = XB + E with B = WQ, which serves as the prognostic regression model.
In the PLS procedure, a matrix of factor loadings P is also obtained, which gives the factor model X = TP + F (F is the unexplained part of X). Partial least squares regression is a very efficient analytical method for high-dimensional data; however, in the case of biological data, it is not always sufficient on its own for drawing biological conclusions.
The OmicsON library includes a PLS calculation routine that performs all of the above operations. In addition to basic information on the percentage of explained variation, the procedure provides the values of the individual factor matrices, weights and regression coefficients, residuals, and model fit results necessary to assess the correctness of the predictive model and to correctly interpret the final results. A cross-validation procedure for estimating the prediction error, i.e., RMSEP (root mean squared error of prediction), MSEP (mean squared error of prediction), or R2 (coefficient of multiple determination), is applied in the PLS routine. The error evaluation allows for the proper selection of variables in the model so that they best describe the analyzed process.
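Assuming an interface like that of the CRAN pls package (an assumption on our part; OmicsON may wrap a different implementation), the fit and the cross-validated RMSEP can be sketched on synthetic data as follows:

  # Sketch with the CRAN 'pls' package on synthetic data.
  library(pls)
  set.seed(2)
  n <- 40
  X <- matrix(rnorm(n * 10), n, 10)   # predictors, e.g., gene expression
  Y <- X[, 1:2] %*% matrix(c(1, -1, 0.5, 2), 2, 2) + matrix(rnorm(n * 2), n, 2)
  fit <- plsr(Y ~ X, ncomp = 5, validation = "CV")  # Y = XB + E via latent factors
  summary(fit)               # percentage of variance explained per latent variable
  RMSEP(fit)                 # cross-validated root mean squared error of prediction
  B <- coef(fit, ncomp = 2)  # regression coefficients B = WQ for two components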
Biological interpretation
Reducing the dimensions of the two data sets and obtaining hidden variables (CVs in the case of CCA and LVs in the case of PLS) allows the observation of relationships between two sets of variables, transcriptomic and metabolic in this case. Relating gene expression to fatty acids paves the way for the biological interpretation of such a dependence. In this way, often non-obvious relationships between the two sets of variables can explain, or indicate the direction of, the analyzed biochemical processes.
Results
As an example application of the OmicsON library, sets of fatty acid and gene transcription data from a murine nutrigenomics study were used [24]. The population comprises mice reared under five different diet regimes, with 40 individuals in total. Based on the analysis of hepatic samples (four biological replicates at three time points each), it comprises gene expression data for 120 selected genes potentially involved in lipid metabolism and the concentrations of 21 fatty acids. This is not a large data set, but it is sufficient to demonstrate the functionality of the OmicsON library.
The purpose of the analysis was to check (with CCA and PLS) which variables, i.e., which genes (transcripts) and fatty acids (metabolites), are associated with one another.
Gene mapping
In the first step, the gene mapping procedure was performed based on knowledge from Reactome [25]. For each class of fatty acids, given a list of small-molecule ChEBI identifiers [26], the associated reactions in the Reactome database were found. All genes found in this set of reactions define a group of genes initially associated with the given class of fatty acids. When a ChEBI id is present in the ChEBI2Reactome.txt file (automatically fetched by the OmicsON library from the Reactome database), the mapping is taken from that file. When a ChEBI id is not present in this file, the ChEBI ontology tree is searched instead, favoring child nodes: the closest child that is present in the mapping file is taken as the representative. When no such children are present, the closest parent is taken. ChEBI ids without any representation are not considered in further analysis.
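This fallback logic can be sketched as follows; the function and the children/parents lookups are hypothetical stand-ins for the ChEBI ontology queries, not the actual OmicsON internals:

  # Hypothetical sketch of the ChEBI id mapping fallback.
  mapChebiId <- function(id, chebi2reactome, children, parents) {
    if (id %in% chebi2reactome$chebi_id)   # direct hit in ChEBI2Reactome.txt
      return(id)
    kids <- intersect(children(id), chebi2reactome$chebi_id)
    if (length(kids) > 0)                  # favor the closest child node
      return(kids[1])
    pars <- intersect(parents(id), chebi2reactome$chebi_id)
    if (length(pars) > 0)                  # otherwise take the closest parent
      return(pars[1])
    NA                                     # ids without representation are dropped
  }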
In the next step, the list of genes was extended with genes taken from the STRING database [27]. The set of genes from the previous step is used to build a query in the STRING database via the STRINGdb Bioconductor library [28]. This group of genes is expanded to include the closest neighbors of these genes found in the gene interaction network, in order to find interacting proteins and associations between genes (Table 1).
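With the standard STRINGdb interface, the expansion might look like this (the version and species settings are illustrative, and genesFromReactome is a placeholder for the gene list from the previous step):

  # Expand a gene list with its closest STRING neighbors.
  library(STRINGdb)
  string_db <- STRINGdb$new(version = "11.5", species = 9606,
                            score_threshold = 400, input_directory = "")
  mapped <- string_db$map(data.frame(gene = genesFromReactome), "gene",
                          removeUnmappedRows = TRUE)      # map symbols to STRING ids
  neighbors <- string_db$get_neighbors(mapped$STRING_id)  # closest interactors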
As a result, a subset of genes functionally related to a set of fatty acids was obtained.
Autocorrelation in transcriptomic and lipidomic sets
Very often the number of genes (variables) is much larger than the number of samples (observations). Canonical correlation is sensitive to the number of observations, and thus the calculation of CCA is not possible with the original sizes of the data sets. The transcriptomic (gene) and lipidomic (fatty acid) data sets were checked for internal correlations, because performing canonical correlation requires the removal of correlated predictors (independent transcriptomic variables, X) and responses (dependent lipidomic variables, Y) (supplementary material S1 Data and S2 Data). The correlation threshold is set arbitrarily and is closely related to the data type. Representatives (below the internal-correlation cutoff) were selected for further analysis (in the supplementary data), while the rest were taken into account only at the stage of biological interpretation (Table 2).
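One simple way to select such representatives (illustrative only; the exact OmicsON selection procedure may differ) is a greedy filter on the correlation matrix, using the thresholds of Table 2:

  # Greedily keep variables whose pairwise |r| stays below the cutoff.
  dropCorrelated <- function(data, cutoff) {
    cm <- abs(cor(data))
    keep <- colnames(data)
    for (v in colnames(data)) {
      if (!(v %in% keep)) next
      others <- setdiff(keep, v)
      keep <- setdiff(keep, others[cm[v, others] > cutoff])  # drop partners of v
    }
    data[, keep, drop = FALSE]
  }
  genesKept <- dropCorrelated(transcriptomics, cutoff = 0.6)  # r = 0.6 for genes
  lipidsKept <- dropCorrelated(lipidomics, cutoff = 0.7)      # r = 0.7 for fatty acids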
Canonical correlation results
The properly prepared data set was subjected to canonical correlation analysis, the essence of which is to find linear combinations of variables in the analyzed sets such that the correlation between them is as large as possible. In this case, the strength of the relationship between the transcriptomic and lipidomic variables is very high and statistically significant for the first hidden variable. The rest of the manuscript focuses on discussing the first hidden variable. The helio plot shows which of the lipids are correlated with which gene expressions (Fig 2). Bars pointing outward from the inner circle represent positive correlations, and bars pointing inward represent negative correlations; the height of a bar indicates the magnitude of the correlation coefficient. The analysis results for the first subgroup show that the expression of the very-low-density lipoprotein receptor (VLDLR) and proteoglycan 4 (PRG4) genes is positively correlated with docosahexaenoic acid (28125) and icosapentaenoic acid (28364), and negatively correlated with icosatrienoic acid (36036), vaccenic acid (36023), and γ-linolenic acid (28661). The strength of correlation in these cases is significantly higher compared to the other fatty acids and gene expressions. A strong relationship is also observed for enoyl-CoA delta isomerase 2 (ECI2) and cyclin-dependent kinase inhibitor 1A (CDKN1A) with some fatty acids. A reduced expression level of ECI2 and CDKN1A correlates with high levels of docosahexaenoic acid (28125) and icosapentaenoic acid (28364), and with low concentrations of vaccenic acid (36023) and γ-linolenic acid (28661).
Partial least squares regression results
Partial least squares regression (PLS) was performed as an alternative method to canonical correlation analysis. The purpose of this analysis was to find a linear combination of variables from the transcriptomic and lipidomic data sets by regression, in order to find the relationships (contributions) of the individual variables to one another. The PLS procedure also allows grouping the data sets taking into account the contribution of the variables to the description of the model.
Table 2. List of genes and fatty acids after removing correlated data within the individual sets. The correlation coefficient cut-offs were assumed arbitrarily: r = 0.6 for genes and r = 0.7 for fatty acids.
[Table 2 column headers: gene symbols (HGNC); lipid IDs (ChEBI)]
The biological interpretation of the effect of all fatty acids on all genes is difficult from an interpretative point of view; hence, the set obtained after the functional analysis was passed to the PLS procedure. The "percentage variance explained" for the latent variables (LVs) was calculated based on the given data sets. The percentage of explained variation allows assessing whether the given data sets satisfactorily describe the relationships between the variables. For ten latent variables, the cumulative percentages of explained variability were: LV1 = 32.57%, LV2 = 51.55%, LV3 = 63.77%, LV4 = 68.84%, LV5 = 75.63%, LV6 = 78.32%, LV7 = 80.91%, LV8 = 83.10%, LV9 = 84.79%, LV10 = 86.00%. Thus the second latent variable explains over 50% of the variability of the analyzed data sets, and the sixth close to 80%.
In addition to "percentage explained variation", several other statistics were calculated, including regression coefficients, scores, loadings, loading.weight, Yscores, Yloadings, projection. They allow assessing the contribution (weight) of individual transcriptomic and lipidomic variables to the model. Having information about the input of individual variables, it can be build models used for prediction. It enables the selection of variables based on the determination of the prediction error e.g. root mean squared error of prediction (RMSEP) and the selection of a latent variable for interpretation in the case of a specific variable. As an example, the results of RMSEP for fatty acids are presented (Fig 3). The interpretation of the contribution of individual fatty acid variables can be interpreted based on latent variables for which the prediction error is the smallest. The results obtained show that at the level of the first latent variable, the largest contribution to the linear model of the relationship of fatty acids and genes have fatty acids with the CHEBI identifiers: 61204, 36036, 28661, 32325, 73705, 28842, 28364, 28125 (Fig 3). The RMSEP calculated for them is the smallest.
Discussion
The OmicsON library is a tool for the integration of transcriptomic and metabolomic data. The procedures implemented in the library allow for gene mapping based on the Reactome and STRING databases, and for statistical analyses, i.e., canonical correlation analysis (CCA) and partial least squares regression (PLS). This approach provides information on the relationship between the transcription level of specific genes and the levels of metabolites. Correlating this information can significantly help in the interpretation of the analyzed biological processes.
The OmicsON library has many advantages, but also several limitations, arising both from the biological data and from the statistical methods. The results obtained with the OmicsON library could be almost fully trusted if one could be sure that data sets such as Reactome or STRING are complete. Meanwhile, it is known that these databases are constantly being replenished, yet they still do not provide a complete description of the relationships between molecules; for some organisms there are many descriptions and relatively few for others. The library gives satisfactory results in the case of: • studying the data of well-annotated species.
• access to reliable molecular networks, to establish functional relationships between molecules. • a well-functioning internet connection, as some procedures require communication with external databases. • grouping of metabolomic and transcriptomic data in such a way that canonical correlation (CCA) can be used, since CCA cannot be applied to a large number of variables with a small number of samples.
Another disadvantage that can be encountered is the limited availability of two sets of data (e.g., transcriptomic and metabolomic) from one organism. Available public repositories usually collect and provide data sets for one type of data only, e.g., only transcriptomic or only metabolomic data. To demonstrate all the functionalities of the OmicsON library, the authors used the only suitable data set available at that time, which concerned the mouse. The results of the calculations were demonstrated based on the Reactome and STRING datasets for humans (without narrowing to specific tissues), because the most complete data set currently available is for humans. This is an inaccuracy, but we would like to emphasize that the goal was to demonstrate the full functionality and capabilities of the OmicsON library, and not a detailed interpretation of specific biological processes.
In omics-based studies, small numbers of biological samples are often observed. This is associated with the high costs of wet-laboratory work and sequencing, as well as the difficult availability of clinical samples. Researchers address this problem by selecting appropriate computational methods. Classical statistical methods are not efficient in such cases; therefore, the research challenge is to use appropriate extensions or combinations of these mathematical tools. This opens new possibilities for the analysis of large data sets with a small number of observations.
One of the most effective ways to achieve improved results is to combine basic statistical techniques (also called gold-standard methods) with innovative approaches like regularized canonical correlation analysis (rCCA) [29] or regularization and grouping of data by the GCA method [30] [31]. This ensures effective analysis of data in which the number of variables significantly exceeds the number of observations [12] [29,32].
In the future, OmicsON will be improved through the implementation of univariate models, including multiple-testing correction strategies, dimension reduction techniques [30], and variable selection models [30] [33].
Supporting information S1 Table. List of genes and lipids. CleanData_06_08: full experiment data with correlation thresholds of 0.6 for X and 0.8 for Y; CleanData_06_07: full experiment data with correlation thresholds of 0.6 for X and 0.7 for Y; Reactome_Ensemble_06_07: experiment data cut down to Reactome pathways based on Ensembl IDs with correlation thresholds of 0.6 for X and 0.7 for Y; Reactome_UniProt_08_09: experiment data cut down to Reactome pathways based on UniProt IDs with correlation thresholds of 0.8 for X and 0.9 for Y; String_Expand_Ensemble_06_07: experiment data cut down to STRING relations based on Ensembl IDs with correlation thresholds of 0.6 for X and 0.7 for Y.
"Computer Science"
] |
Acoustic Conditions Analysis of a Multi-Sensor Network for the Adaptation of the Anomalous Noise Event Detector †
The DYNAMAP project is aimed at implementing a dynamic noise mapping system able to determine the acoustic impact of road infrastructures in real time. It was established as a result of the European Noise Directive 2002/49/EC. A Wireless Acoustic Sensor Network (WASN) is used to collect the measurements in two pilot areas: in the city of Milan (urban) and on the A90 motorway around Rome (suburban). For a proper evaluation of the noise level of the road infrastructures, the anomalous noise events (ANE) unrelated to traffic noise (e.g., sirens, horns, speech, doors, etc.) should be removed before updating the noise maps. For this purpose, an anomalous noise events detector (ANED) was initially designed and trained using data from a real-life recording campaign. In this work, we continue adapting the ANED algorithm to conform to the requirements of the final 25-node hybrid WASN deployed in the suburban environment. Specifically, the study focuses on the analysis of the spectro-temporal characteristics of the acoustic data in real-operation conditions and their differences between the 19 high-capacity nodes in the Rome pilot area in order to adapt the ANED to run on the entire WASN appropriately.
Introduction
The number of people living in urban and suburban areas is increasing year after year [1]. As a consequence, the quality of life of citizens is being negatively affected by the increase in Road Traffic Noise (RTN) levels [2], which has been identified as one of the main sources of health-related problems [3,4]. At the European level, the prevention and reduction of environmental noise is being addressed by competent authorities based on the European Noise Directive 2002/49/EC (END) [5], and the consequent strategic noise mapping assessment CNOSSOS-EU [6] has to be implemented by all European member states by the end of 2018. The different pillars of these regulations are: (i) determining noise exposure, (ii) informing the affected citizens, and (iii) preventing and reducing environmental noise where necessary.
Moreover, the END requires the member states to tailor and publish noise maps together with the corresponding action plans every five years for large agglomerations (with more than 100,000 inhabitants) and other major infrastructures. This requirement has been (and is still being) mainly addressed by experts who collect representative acoustic data using certified sound level devices, which is subsequently fed into the corresponding noise tailoring software [7]. Nevertheless, the emergence of Wireless Acoustic Sensor Networks (WASNs) provides the possibility of monitoring environmental noise pervasively in urban and suburban environments [8,9]. Among these WASN projects, the LIFE+ DYNAMAP project [10] is aimed at developing a dynamic noise mapping system able to detect and represent the acoustic impact of road infrastructures in real time. Two hybrid low-cost WASNs (including high-capacity and low-capacity nodes) have recently been deployed in the two pilot areas of the project [11,12]: the first within District 9 of the city of Milan, and the second along the A90 motorway surrounding Rome, corresponding to an urban and a suburban environment, respectively. For a proper evaluation of the noise level of the road infrastructure, the anomalous noise events (ANE) unrelated to regular traffic noise (e.g., sirens, horns, speech, doors, music, etc.) should be automatically removed from the equivalent noise level computation to tailor reliable RTN maps. To that effect, an Anomalous Noise Event Detector (ANED) has been designed and implemented to run on the 19 high-capacity low-cost acoustic sensors of the WASN [13,14].
After the original design of the ANED based on a synthetic audio database [15], the core algorithm was improved and trained with audio data from a recording campaign conducted across different representative locations of the pilot areas [13]. Specifically, Rome's suburban acoustic environment was sampled through a subset of the 19 high-capacity sites of the A90 Rome motorway during short periods of time (e.g., 20-30 min) throughout the daytime [16]. Although the collected audio came from a real-life environment, it was impossible to take into account all of the factors that would affect the WASN during real operation, i.e., working 24 hours/day and during days of different traffic activity (e.g., weekdays and weekends). Furthermore, the measurements were performed using a tripod placed over the highway portal floors with an oblique orientation during the recording campaign [16], which differs from the final position of the low-cost sensors in the WASN.
In Reference [17], the first steps towards adapting the ANED to run in the final operational conditions of the WASN were presented, after deploying the low-cost high-capacity nodes in the Rome pilot. That work covered two main stages: the first focused on validating the performance of the preliminary ANED version on the real-life operation data, and the second on the process followed to collect new data from the sensors of the deployed 19-node WASN. After the manual labelling of a representative subset of the recorded data, a significantly larger and richer dataset was obtained than the one from the recording campaign (e.g., new ANE categories were included), the new one being composed of around 107 h of audio. The results in Reference [17] were not satisfactory in terms of accuracy; therefore, in this paper, we present the next step, consisting of a subsequent analysis carried out on the 107 h of real-operation audio data. The final goal of this study is to analyze the spectro-temporal patterns of the noise received by each sensor in order to determine to what extent it is necessary to adapt the ANED to run appropriately on all the nodes of the WASN.
The rest of this paper is structured as follows. In Section 2, a detailed description of the operating conditions of the Rome pilot area is given. After that, in Section 3, a spectro-temporal analysis is conducted on the audio data obtained for each high-capacity node of the network. Finally, Section 4 presents the conclusions derived from this research.
Description of the Operating Conditions in the Rome Pilot
The WASN of the suburban area of the Rome ring road comprises 24 acoustic sensors, five of which are low-capacity sensors without enough computational resources to run the ANED. The locations of the 19 high-capacity sensors of the WASN in the Rome suburban pilot area are shown in Figure 1; in red are those that were also used during the initial recording campaign. These are the locations where the recordings were conducted to provide the acoustic data used to re-train and validate the ANED performance in the final locations of the sensors.
The system must be robust and stable, as it operates 24 hours a day, 7 days a week. Thus, the ANED training must include, as far as possible, all kinds of real-life situations. This is a very challenging task, which is difficult to accomplish especially when the amount of available resources is limited, e.g., the processing and storage capabilities. Our approach to this problem was to plan one-day real-operation recordings of all WASN sensors, selecting a weekday and a weekend day to provide diverse traffic conditions, i.e., Thursday the 2nd and Sunday the 5th of November, 2017. From each sensor, 20 min was recorded every hour. Figure 2 shows a schematic diagram of the recording days and the recording duration. Sixteen hours of audio were obtained for each location, covering both a workday and a weekend day and representing every hour of the day. After the performed labelling, these data could be used to properly re-train the ANED and improve its accuracy.
Acoustic Spectral Analysis of the Nodes of the WASN
In this section, we analyze the spectral distributions for each sensor and for each hour using the available audio recordings. The analysis was performed using a Gammatone filterbank as the spectral representation because of its high coherence with mammalian auditory sensitivity. It was implemented following the approach of the Gammatone Cepstral Coefficients (GTCC) proposed in Reference [18]. Acoustic signal frames of 30 ms were selected using a Hamming window [19], and 48 subband energies were obtained between 20 Hz and 22 kHz. Using the 20 min audio file of a given sensor, a mean spectrum was computed, and the 24 mean spectra were compiled to define the spectrum-time profile of this sensor during the day.
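A simplified sketch of this frame-wise computation is given below; it uses plain FFT bands on a log-frequency grid rather than a true Gammatone filterbank, and assumes a mono signal vector x sampled at fs Hz:

  # Mean subband energies of a recording (one column of the spectrum-time profile).
  subbandProfile <- function(x, fs = 44100, frameMs = 30, nBands = 48) {
    N <- round(fs * frameMs / 1000)
    w <- 0.54 - 0.46 * cos(2 * pi * (0:(N - 1)) / (N - 1))  # Hamming window
    nFrames <- floor(length(x) / N)
    edges <- exp(seq(log(20), log(22000), length.out = nBands + 1))  # 20 Hz-22 kHz
    freqs <- (0:(N - 1)) * fs / N
    E <- matrix(0, nFrames, nBands)
    for (i in seq_len(nFrames)) {
      frame <- x[((i - 1) * N + 1):(i * N)] * w  # 30 ms frame, windowed
      p <- abs(fft(frame))^2                     # power spectrum
      for (b in seq_len(nBands))
        E[i, b] <- sum(p[freqs >= edges[b] & freqs < edges[b + 1]])
    }
    colMeans(E)
  }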
The spectrum-time profiles of the 19 sensors of the WASN for the weekday are shown in Figure 3. All sensor profiles follow a broadly similar pattern, which can basically be attributed to road traffic noise (the majority class): higher noise levels are obtained in the lower frequency band, and higher amplitudes are observed in the time region of higher traffic density (between 6:00 and 21:00). Nevertheless, it can be seen that there are some fine variations between sensor profiles, which reflect the diversity of traffic conditions and noise events at these specific locations.
In Figure 4, the spectrum-time profiles are shown for the weekend day, where sensor hb119 was active only between 00:00 and 13:00 (the remaining values in its profile have been fixed to 0). In this case, high energy values can be observed at 14:00, coinciding with the sound of rain. Furthermore, by comparing this figure with the previous one (Figure 3), several similarities can be observed. Sensors hb112 and hb134 show low energy values at high frequencies in both figures; the reader may observe that the two are located very near to one another, as shown in Figure 1. However, sensor hb119 is also located near these two, yet its high frequencies present higher energy values on both dates. On the other hand, sensors hb103, hb111, hb143, hb156, and hb157 show high energies at high frequencies on both days and are all located in the south-west part of the Rome ring road. In the northern part of the ring road, we have sensors hb104, hb141, and hb153; however, the energy profiles are only similar for hb104 and hb141. In the eastern part of the ring road, hb112 and hb119 are also located near to one another, and both present a similar spectrum-time pattern, the hb119 sensor having a higher energy overall. As can be seen, most sensors located near each other present a similar energy profile or pattern; however, a more detailed analysis should be conducted to better characterize and quantify the differences between the spectrum-time profiles. In Figure 5, the spectrum-time profiles of one sensor (hb103) are shown for the two analyzed days in order to better explore the differences between the two days. As can be observed, the mean monitored road-traffic noise at a certain location presents significant differences when a weekday is compared with a weekend day; e.g., the onset of traffic noise in the morning and its drop-off in the evening show a softer behavior during the weekend.
In order to better exploit all the available data, further analyses should be conducted to establish to what extent the observed differences between the spectrum-time profiles are related, e.g., by clustering the sensors and training the ANED differently for each group.
Conclusions
In this study, the various recording locations had different acoustic profiles, which has an effect on the sound levels as well as on the ANED performance. For this reason, it is of great importance to study the acoustic conditions of the sensors which make up the sensor network. In this paper, the acoustic profiles of the high-capacity nodes of the Rome WASN are described in detail, in order to explore the differences among them. In addition, the differences between two different days are analyzed: a weekday and a weekend day were selected, and data from 20 min of every hour of these days were analyzed. The last phase of the DYNAMAP project involves the authors conducting fine-tuning of the acoustic sensors to improve the individual performance of each node in the network. This study is focused on analyzing the acoustic profile of the nodes in an attempt to ascertain the sensor network's uniformity and to use this information in the next stage of training of the anomalous noise event detector. Future lines of investigation will include measuring the acoustic differences of each sensor and comparing these metrics with the event detection accuracy.
Figure 1. Map with sensor location information within the Wireless Acoustic Sensor Network (WASN) of the DYNAMAP project.
Figure 3. Spectrum-time profiles of the 19 sensors of the WASN on a weekday (2 November 2017).
Figure 4. Spectrum-time profiles of the 19 sensors of the WASN during the weekend (5 November 2017).
"Engineering",
"Environmental Science"
] |
Microglia–Neutrophil Interactions Drive Dry AMD-like Pathology in a Mouse Model
In dry age-related macular degeneration (AMD), inflammation plays a key role in disease pathogenesis. Innate immune cells such as microglia and neutrophils infiltrate the sub-retinal space (SRS) to induce chronic inflammation and AMD progression. A major gap in our understanding, however, is how these cells interact with each other in AMD. Here, we report a novel concept of how dynamic interactions between microglia and neutrophils contribute to AMD pathology. Using well-characterized genetically engineered mouse models as tools, we show that in the diseased state, retinal pigmented epithelial (RPE) cells trigger a pro-inflammatory (M1) transition in microglia with diminished expression of the homeostatic marker CX3CR1. Activated microglia localize to the SRS and regulate local neutrophil function, triggering their activation and thereby inducing early RPE changes. Ligand-receptor (LR)-loop analysis and cell culture studies revealed that M1 microglia also induce the expression of neutrophil adhesion mediators (integrins β1/α4) through the interaction of these integrins with CD14 on microglia. Furthermore, microglia-induced neutrophil activation and subsequent neutrophil-mediated RPE alterations were mitigated by inhibiting Akt2 in microglia. These results suggest that the Akt2 pathway in microglia drives M1 microglia-mediated neutrophil activation, thereby triggering early RPE degeneration, and is a novel therapeutic target for early AMD, a stage without treatment options.
Introduction
Inflammation plays an important role in maintaining tissue homeostasis [1]. However, dysregulation of the inflammatory response is associated with tissue damage and the onset of several aging diseases, including age-related macular degeneration (AMD) [2][3][4], the leading cause of blindness in the elderly [5]. Dry AMD is the major form of the disease and, unfortunately, is to date without effective treatment [6]. The retinal pigmented epithelial (RPE) cells, which are the first cells to be affected in dry AMD [7], are subjected to low-grade protective (para-)inflammatory changes with aging, thereby activating resident immune cells such as microglia [8][9][10], which act to maintain retinal homeostasis [10]. However, environmental risk factors and genetic predispositions can initiate an unregulated inflammatory response during aging, thereby triggering detrimental (chronic) inflammation in the retina [2][3][4]. Chronic inflammation is associated with a prolonged heightened immune response, breakdown of the blood-retinal barrier, activation of complement factors, and migration of microglia, monocytes, and neutrophils to the sub-retinal space (SRS), a region now thought to be critical for the para- to chronic-inflammatory transition in AMD pathogenesis [11][12][13][14][15][16][17].
In the SRS, the RPE cells play a key role in the activation and infiltration of the innate immune cells including microglia, neutrophils, and monocytes, key immune cell types that have previously been shown to be important for AMD progression [12,14,17]. Microglia can have both protective and detrimental functions in the retina during retinal homeostasis as well as degeneration, which can be governed by specific cues from the RPE [11,12,15,18]. In addition, we have shown that RPE cells trigger neutrophil activation and infiltration into the retina [17] in a mouse model of dry AMD that lacks the Cryba1 gene (which encodes βA3/A1-crystallin) specifically in the RPE (conditional knockout; cKO) [19,20], as well as in human dry AMD patients [16,17]. Moreover, neutrophil homing into the retina is associated with retinal degeneration as seen in AMD [17]. A major gap in our understanding thus far is how these immune cells interact with one another in the SRS during AMD pathogenesis to potentiate chronic inflammation and retinal degeneration.
We speculate that during AMD pathogenesis, activated microglia interact with infiltrating neutrophils in the SRS to aggravate retinal inflammation and trigger retinal degeneration. Understanding the molecular pathways that regulate this dynamic interaction between a tissue resident immune cell (microglia) and a peripheral immune cell (neutrophil) and the impact these interactions have on the retina during disease progression will stimulate future drug discoveries for this debilitating disease.
Herein, we demonstrate that this dynamic interaction between microglia and neutrophils is important in the AMD pathogenesis in mouse models. We show that RPE-derived soluble factors trigger microglial activation and pro-inflammatory (M1) transition in the diseased state due to Akt2 activation in microglia. M1 microglia activate neutrophils as suggested by neutrophil extracellular traps (NET) formation, along with increased lipocalin-2 (LCN-2) and myeloperoxidase (MPO) levels. Activated neutrophils, in turn, induce early RPE morphological alterations. Further, the pro-inflammatory microglia also upregulate adhesion factors such as integrins β1 and α4 on neutrophils, which are critical for transmigration into the tissue, via activation of CD14 (microglia)/integrin β1 and α4 (neutrophils) interactions between the two immune cells. Intriguingly, targeting Akt2 in microglia with a specific inhibitor reduced the pro-inflammatory transition in microglial cells and subsequently reduced neutrophil activation in vitro, as well as neutrophil mediated RPE alterations in vivo, which suggests that this pathway is a novel therapeutic target for early, dry AMD.
Antibodies
Antibodies for flow cytometry were purchased from BD Biosciences, Franklin Lakes, NJ, USA.
Animals
All animal studies were conducted in accordance with the Guide for the Care and Use of Animals (National Academy Press) and were approved by the University of Pittsburgh Animal Care and Use Committee (Protocol # 20108281). Both male and female mice were used for this study [17,21]. RPE-specific Akt2 KI mice were generated as described previously [22] by Cyagen. Briefly, the T2A sequence followed by the Akt2 coding sequence (CDS) was inserted between the last exon and the 3′ untranslated region (3′ UTR) of the mouse Best1 gene, which in the eye is specifically expressed in the RPE. A Neo cassette flanked with self-deletion anchors (SDA) was inserted into the intron between exons 11 and 12 of mouse Best1 for germ-cell deletion of the cassette. βA3/A1-crystallin conditional (Cryba1 cKO) [17,19,20] and complete knockout (Cryba1 KO) [16,21] mice were also generated and maintained as previously described [16,17,[19][20][21][22]. Nonobese diabetic/severe combined immunodeficiency (NOD-SCID) mice (NOD.CB17-Prkdc scid/J; 5 weeks old) were purchased from The Jackson Laboratory, USA. All mice used in this study were RD8 negative.
RPE Explant Culture
RPE explants from 10-month-old WT, Cryba1 KO, and Akt2 KI mice were cultured as described previously [23]. Briefly, fresh eyes were enucleated, and the anterior segment was removed. The neural retina was carefully removed, and the posterior eye cup was then dissected into four petals. The resulting RPE-choroid-sclera (RCS) complexes were flattened onto polyvinylidene difluoride (PVDF) membranes with the RPE cells facing up and cultured as previously described [17]. The RPE spent medium (RPESM) was harvested after 24 h by carefully removing the media for further experiments [17].
Microglia Culture
Mouse microglial cells were purchased from ScienCell, Carlsbad, CA, USA (Cat# M1900-57) and were cultured according to the manufacturer's protocol. For co-culture experiments only, microglia were plated (1 × 10⁵ cells/mL) on the underside of the insert of transwell plates (Cat# COR-3460, Corning, Glendale, AZ, USA). At 80% confluency, the microglial cells were exposed to RPESM from WT, Cryba1 KO, and Akt2 KI RPE explant cultures at a dilution of 1:1 (RPESM:microglia media) for 24 h. Akt2 inhibitor (CCT128930: Selleckchem, Houston, TX, USA; Cat# S2635) treatment at a dose of 5 nM, or vehicle only (2.5% dimethyl sulfoxide; DMSO in phosphate-buffered saline; PBS), was given at the time of RPESM exposure for 24 h. CD14 neutralization was performed 2 h before the start of the neutrophil co-culture using a neutralizing antibody against CD14 (Thermo Fisher, USA; Cat# 14-0149-82) at a dose of 1 µg/mL, and the treatment was maintained until the end of the experiment.
Neutrophil Culture
Neutrophils from WT mice were isolated as previously described [17]. Briefly, bone marrow cells were isolated from femurs and tibias and purified over a Percoll discontinuous density gradient. After isolation, neutrophils were resuspended at a density of 10 × 10⁶ cells/mL in Ca²⁺- and Mg²⁺-free Hank's balanced salt solution (HBSS), supplemented with 20 mM HEPES, and then co-cultured with microglial cells for 4 h at 3 × 10⁴ cells/mL (neutrophil:microglia = 1:10), as explained previously with minor modifications [24]. Briefly, transwell plates with microglia (from the different experimental groups) were exposed to WT neutrophils on the apical surface for 4 h. Neutrophil viability was measured by trypan blue dye exclusion after 4 h of incubation.
Immunophenotyping of SRS
The SRS (including RPE-choroid) was dissected following perfusion to remove immune cells from choroidal blood vessels, as described previously [23]. The tissues were then digested with 0.05% collagenase D (Roche, Switzerland, Cat# 11088858001) at 37 °C for 30 min, teased apart with blunt-end forceps, and pipetted 10 times to release the cells. The cell suspension was then passed through a 70 µm cell strainer and centrifuged at 1300× g at 4 °C for 20 min. The entire pellet was first blocked with Fc blocker and 2% each of rat and mouse serum in 5% bovine serum albumin (BSA) in PBS for 30 min at room temperature. Dead cells were then removed from the cell suspension using the cell debris removal solution (Miltenyi Biotec, Gaithersburg, MD, USA, Cat# 130-109-398). Flow cytometry was used to assess the percentage of neutrophils in the parent cell population of CD45-high CD11b+ cells (see Supplementary Figure S1) and microglia, after staining with anti-Ly6G, Ly6C, CD11b, CD45, and CX3CR1 antibodies at a concentration of 1 µg/mL for 90 min at room temperature, as previously described [17].
Evaluation of Cell Surface Receptor Expression
Freshly cultured bone marrow-derived neutrophils from WT mice co-cultured with microglia from the different experimental conditions (as outlined above) were incubated with Alexa fluor 488-tagged Integrin β1 (Santa Cruz Biotechnology, USA) antibodies or Alexa fluor 568-tagged Integrin α4 (Cell Signaling Technology, USA) at a concentration of 1 µg/mL in PBS containing 1% BSA for 1 h [17]. The cell surface expression of integrin β1 (FITC-A Mean fluorescence) and integrin α4 (PE-A Mean fluorescence) was evaluated among these cells using BD Fortessa flow cytometer (BD Biosciences, Franklin Lakes, NJ, USA). Results were analyzed as described previously using FlowJo software [17].
Cytometry Bead Array (CBA)
Microglia spent medium and RPE lysates from the different experimental groups were used to determine the levels of cytokines and chemokines using the Legendplex mouse proinflammatory chemokine panel (Biolegend, USA; Cat# 740451) and the Legendplex mouse inflammation panel (Biolegend, USA; Cat# 740446). The experiments were performed, and the results analyzed, according to the manufacturer's protocols.
Single Cell (sc) RNA Sequencing and Bioinformatics
The SRS (including RPE-choroid) was dissected following perfusion to remove immune cells from choroidal blood vessels, as described previously [23], from 15-month-old Cryba1-floxed and cKO mice. The single-cell suspension was subjected to scRNAseq as a paid service from the Genomics Research Core of the University of Pittsburgh to identify the RNA expression profiles of the different cells. Bioinformatics analysis was performed by creating Seurat objects for each sample with the function "CreateSeuratObject" in the Seurat package and then filtering out cells with nFeature_RNA > 8000, nFeature_RNA < 250, nUMI > 40,000, nUMI < 500, log10(GenesPerUMI) < 0.8, or a mitochondrial rate > 20%. Scrublet was used to remove predicted doublets with default parameters [23]. Genes detected in fewer than 5 cells were also removed. As a result, the expression of 17,626 genes in 10,134 cells was used for downstream analysis. After clustering the cells, cell type identities were assigned based on the top marker genes of each cluster as well as by visualizing the expression of specific marker genes of candidate cell types [23]. Average expression values of the genes in each cell type and each sample were then calculated with the function "AverageExpression" in the Seurat package. Differential expression analysis was performed on each cell type of interest between the Cryba1-floxed and cKO samples with the function FindMarkers (test.use = "wilcox") in the Seurat package [23].
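The quality-control filtering described above can be sketched with standard Seurat calls (the input directory is hypothetical, and nCount_RNA is Seurat's name for nUMI):

  # QC filtering with the thresholds stated above.
  library(Seurat)
  counts <- Read10X(data.dir = "sample_matrix/")  # hypothetical 10x output directory
  obj <- CreateSeuratObject(counts = counts)
  obj[["percent.mt"]] <- PercentageFeatureSet(obj, pattern = "^mt-")  # mouse mito genes
  obj[["log10GenesPerUMI"]] <- log10(obj$nFeature_RNA) / log10(obj$nCount_RNA)
  obj <- subset(obj, subset = nFeature_RNA >= 250 & nFeature_RNA <= 8000 &
                  nCount_RNA >= 500 & nCount_RNA <= 40000 &
                  log10GenesPerUMI >= 0.8 & percent.mt <= 20)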
For the analysis of ligand-receptor interactions between cells, the default built-in ligand-receptor interaction database, intracellular signaling database and gene regulatory networks in the package LRLoop were used [25]. The ligand-receptor pairs and intracellular signaling interactions with a detection rate less than 2.5% in the corresponding cell types were removed [25]. Analysis of the remaining candidate ligand-receptor pairs between each pair of cell types of interest in each sample was performed using the LRLoop package with its standard pipeline and default parameters [23].
Subretinal Injection
NOD-SCID mice (male, 5 weeks old) were used for the study [17,23]. Mice were anaesthetized by intraperitoneal injection of a mixture of ketamine (Covetrus, Dublin, OH, USA; Cat# 071069) at a dose of 50 mg/kg body weight and xylazine (Sigma Aldrich, Cat# X1251) at a dose of 10 mg/kg body weight, and sub-retinal injections [17,23] of neutrophils (1 µL at a concentration of 5 × 10⁶ cells/mL) from the different experimental groups (co-cultured with microglia exposed to RPESM from WT, Cryba1 KO, and Akt2 KI RPE explant cultures) were given to each mouse. Three days after the injections, the NOD-SCID mice were euthanized, and the eyes were enucleated for RPE flat mount preparations [17,21].
RPE Flatmount and Phalloidin Staining
Eyes were freshly enucleated and fixed in 2% paraformaldehyde (PFA) for 10 min, followed by the removal of the anterior segment (cornea, lens, and attached iris pigmented epithelium). The resulting posterior eyecups were fixed in 2% PFA for 1 h at room temperature for RPE flat mount preparation [17,21]. The eyecup was quartered into a petaloid structure and the neural retina was carefully removed. The resulting eyecup was further cut radially into eight pieces from the optic nerve head to the periphery. Immunostaining of RPE flatmounts was performed using Alexa Fluor 488-conjugated phalloidin (1:1000) (Thermo Fisher, USA; Cat# A12379) with 1 µg/mL DAPI (Thermo Fisher, USA; Cat# 62248), incubated at room temperature for 1 h [17,21]. The flatmounts were washed six times with 1× tris-buffered saline (TBS), mounted on cover slips with DAKO mounting agent (Agilent, Santa Clara, CA, USA, Cat# S3023), and then visualized under a confocal microscope (Zeiss LSM710, Oberkochen, Germany) to assess RPE morphological changes [17,21].
Retinal Cryosections and Immunostaining
Whole eyes from freshly dissected mice were enucleated and fixed in 2% paraformaldehyde (PFA) for 10 min, and the anterior parts were then removed. The posterior eyecups were again fixed in 2% PFA for 1 h at room temperature. Immunofluorescence was performed on frozen sections from the posterior eyecups as described previously [16,21]. The sections were incubated with phosphate-buffered saline containing 5% normal donkey or goat serum for 30 min and then incubated overnight at 4 °C with primary antibodies against Iba1 (Wako, Japan; Cat# 019-19741) or Ly6G (Biorbyt, St. Louis, MO, USA; Cat# orb322983) diluted 1:100. The sections were washed with TBS and then incubated at room temperature with the respective secondary antibodies together with 1 µg/mL DAPI (Thermo Fisher, USA; Cat# 62248), with or without Alexafluor 488-conjugated phalloidin (1:1000) (Thermo Fisher, USA; Cat# A12379). Sections were again washed with TBS and then mounted using DAKO mounting agent (Agilent, USA, Cat# S3023). Images were acquired on a Zeiss LSM 710 confocal workstation.
Statistical Analysis
Graphs were generated using Microsoft Excel and GraphPad 8.0 software [17,21,23]. Statistical analyses were performed using one-way ANOVA followed by Tukey's post hoc test to measure differences between groups [17,21,23]. Significance was set at p < 0.05. Each biological replicate had at least three technical replicates. Results are presented as mean ± standard deviation (SD) [17,21,23].
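As a worked illustration of this analysis (one-way ANOVA followed by Tukey's post hoc test at p < 0.05, with results summarized as mean ± SD), the base-R sketch below uses simulated measurements for three hypothetical groups; the group names and values are not from the study.

```r
set.seed(1)

# Hypothetical tidy data: one row per technical replicate
df <- data.frame(
  group = rep(c("WT", "Cryba1_KO", "Akt2_KI"), each = 6),
  value = c(rnorm(6, 1.0, 0.1), rnorm(6, 1.8, 0.1), rnorm(6, 1.7, 0.1))
)

fit <- aov(value ~ group, data = df)   # one-way ANOVA
summary(fit)                           # overall F-test
TukeyHSD(fit, conf.level = 0.95)       # pairwise Tukey comparisons (p < 0.05)

# Mean ± SD per group, as reported in the figures
aggregate(value ~ group, data = df,
          FUN = function(x) c(mean = mean(x), sd = sd(x)))
```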
Microglia and Neutrophils Infiltrate the SRS in a Mouse Model upon Activation of Retinal Inflammation
We have previously shown that in the Cryba1 cKO mouse model with a dry AMD-like phenotype [19], microglia are activated [20] and neutrophils infiltrate the SRS with age [17]. To assess the spatiotemporal localization of the activated microglia in aged Cryba1 cKO mice, we performed immunofluorescence studies on retinal cryosections and found that microglia (Iba1-positive cells) home into the SRS of the aged (9 months old; the age at which the AMD-like phenotype is observed in this mouse model) Cryba1 cKO retina, but not in age-matched floxed (Cryba1 fl/fl) controls (Figure 1a). We also found accumulation of microglia (Iba1-positive cells) and neutrophils (Ly6G-positive cells) (Figure 1b,c) in the SRS of aged (9 months old) Akt2 KI mice, a genetically engineered mouse model with constitutive activation of Akt2 specifically in the RPE cells [22]. Akt2 is a kinase that has previously been shown to be critical for the activation of inflammation [26,27] and is upregulated in RPE cells from human AMD patients as well as in the Cryba1 cKO mouse model [16,17]. Since Iba1 is expressed in both microglia and macrophages/monocytes [11,15], these Iba1+ cells could be either microglia or macrophage-lineage cells. In addition to microglia, macrophages/monocytes are also known to infiltrate the SRS in animal models and human AMD patients [11,13,15].
To substantiate the sub-retinal accumulation of neutrophils and the presence/activation of microglia or macrophages in these mouse models, we performed immunophenotyping for multiple cell surface markers to differentiate the various immune cells in the sub-retinal tissue (including RPE/choroid) by flow cytometry [17], finding a significant elevation in the number of neutrophils in 10-month-old Cryba1 cKO and Akt2 KI mice (Figures 2a and S1). Interestingly, we also found a decline in the expression of the homeostatic marker CX3CR1 among the microglial cells (Figures 2b,c and S1). However, no significant increase was observed in the number of monocytes or macrophages in the Cryba1 cKO and Akt2 KI mice compared to controls (Figures 2b,c and S1). CX3CR1 has previously been demonstrated to be associated with AMD, both in animal models and in human AMD patients [11,15]. Loss of function of CX3CR1 has been shown to be critical in peripheral immune cell infiltration into the retina as well as in the induction of chronic inflammation and AMD-like pathology in mouse models [11,15]. It is likely that in our mouse models, the RPE-mediated activation of the proinflammatory niche in the SRS is the cause of immune cell infiltration.

Figure 2 (caption, partial): … showing diminished expression of the protein (CX3CR1, from context) in microglia (CD45^Low CD11b^+ Ly6C^− Ly6G^−) from the SRS of 10-month-old Cryba1 cKO and Akt2 KI mice, compared to controls. n = 3. ** p < 0.01.
RPE Drives Microglia Activation through the Akt2 Signaling Pathway
The RPE is the first cell type to be affected in dry AMD [7]. It is known that diseased RPE cells trigger infiltration of the major retinal resident immune cells, microglia, into the SRS, thereby exacerbating the inflammatory processes, as evident from elevated levels of pro-inflammatory cytokines and homing of peripheral immune cells such as neutrophils, monocytes, and mast cells into the retina [8-11,15,28]. This heightened inflammatory state subsequently contributes to RPE loss and photoreceptor degeneration in AMD [2-4]. To understand the underlying mechanisms by which the RPE cells could induce microglia activation during AMD pathogenesis, we performed scRNAseq analysis on cells isolated from the SRS (including RPE and choroid) of aged (15 months old) Cryba1 fl/fl and Cryba1 cKO mice [23] and identified 35 different cell clusters, including the RPE [23] along with immune cell (microglia, neutrophil, monocyte) populations (Supplementary Figure S2), which have been previously documented to be important in AMD pathogenesis [10-18]. Further in-depth analysis revealed several differentially expressed genes in the RPE and microglial cells (Figure 3b,c). To pinpoint the signaling molecules associated with the RPE-microglia interaction in the disease state of this mouse model, we performed a novel ligand-receptor (LR) loop bioinformatic analysis [25] that showed several interactive partners linking RPE (ligand; L) and microglia (receptor; R) in 15-month-old Cryba1 fl/fl and Cryba1 cKO mice (Figure 3d). The LR loop interaction between RPE and microglia revealed a notable increase in the interactions between cognate ligands from RPE cells (such as CXCL12 and Col4a3) and their respective cell surface receptors, such as Itgb1 and CD47, on microglia. We also observed a decrease in CD11b on microglia in aged Cryba1 cKO mice (Figure 3d), a change previously shown to be important for pathogenic accumulation of these immune cells in the SRS, leading to chronic inflammation and retinal degeneration [11,14]. We have also previously shown that RPE cells from the Cryba1 cKO mice express high levels of several pro-inflammatory cytokines and chemokines [17]. To further assess the levels of pro-inflammatory chemokines in the RPE cells, we performed a cytometry bead array (CBA) on RPE lysates from 10-month-old WT, Cryba1 cKO, and Akt2 KI mice. Our results showed significant upregulation of CXCL10, CCL3, CCL4, and CXCL5 in Cryba1 cKO and Akt2 KI RPE cells relative to controls (Figure 3e). Chemokines such as CXCL10 and CCL4 have previously been documented to be essential for microglial activation [29,30], whereas CXCL10, CCL3, and CXCL5 have been associated with AMD pathogenesis [31,32]. It is likely that these pro-inflammatory mediators are secreted by the RPE, but infiltrating immune cells at the SRS could also contribute to the levels of these cytokines, as explained previously [12,14,16,17], and such pro-inflammatory contributions from different cell types at the SRS might be responsible for the age-dependent activation and migration of microglia and other immune cells into the SRS in these mouse models.

We have previously used Cryba1 KO RPE explants for harvesting RPESM for in vitro experiments since, in Cryba1 KO mice, the Cryba1 gene that encodes the βA3/A1-crystallin protein is completely absent in all RPE cells, whereas the Cryba1 cKO mice retain about 15% of the normal complement of βA3/A1-crystallin protein due to the mosaic expression of Best1-Cre, which was used to generate the Cryba1 cKO animals [23]. In addition, Cryba1 KO mice develop a heightened inflammatory response with age and an early AMD-like phenotype [16,21]. To confirm that pro-inflammatory mediators secreted by the RPE could trigger microglial activation, RPESM harvested from RPE explant cultures (24 h culture) from aged WT, Cryba1 KO (complete knockout), and Akt2 KI mice was added to cultures of mouse microglia. After 24 h (Figure 4a), Western blot analysis showed significant upregulation of Akt2, a known regulator/activator of pro-inflammatory (M1) microglia [33], in lysates from Cryba1 KO and Akt2 KI RPESM-exposed microglia compared to those from WT RPESM-exposed microglia (Figure 4b,c). Further, CBA analysis of microglia spent medium (MSM) revealed significantly increased levels of pro-inflammatory (M1) mediators [34] such as TNFα, IL-12, and IL-6, and a decrease in the level of the M2 mediator IL-10 [34], in the MSM of Cryba1 KO and Akt2 KI RPESM-exposed microglial cells (Figure 4d). Compared to WT RPESM-treated microglia, cells exposed to Cryba1 KO and Akt2 KI RPESM showed significantly increased levels of the chemokines CXCL10, CCL4, and CXCL13 (Figure 4e), which are known to be activated during the M1 transition in macrophage-lineage cells such as microglia [29,35]. Interestingly, inhibition of Akt2 in microglia using a specific Akt2 inhibitor, CCT128930 [17] (5 nM), for 24 h significantly decreased the levels of pro-inflammatory cytokines and chemokines released from microglia, even following exposure to Cryba1 KO or Akt2 KI RPESM (Figure 4d,e). These results demonstrate that targeting Akt2 can reverse microglia activation in vitro.
Microglia and Neutrophils Interact in the SRS of a Mouse Model of Dry AMD
Activated microglia are known to induce retinal degeneration in AMD as well as in other neurodegenerative diseases [2,3,11,12,15,18,34]. We speculate that these activated microglia further exacerbate retinal inflammation in the mouse models by secreting pro-inflammatory cytokines/chemokines that activate other innate immune cells, such as neutrophils. It is known that peripheral immune cells such as neutrophils infiltrate the SRS in a mouse model as well as in human AMD donor retinas and play a critical role in the para-inflammatory to chronic inflammatory transition during AMD progression [16,17]. Microglia-neutrophil interaction has previously been shown to be important in other age-related diseases such as Alzheimer's disease [36]. Currently, the extent to which microglia activate neutrophils, and how these immune cells interact with each other in the SRS during AMD progression, remains unclear. To understand the dynamic interactions between these immune cells, we used LR loop bioinformatic analysis [25], which showed several interactive partners linking microglia and neutrophils and several differentially expressed microglial (Figure 3c) and neutrophil (Figure 5a) genes. Interestingly, we found that both the CD14 (microglia)/integrin β1 (Itgβ1) (neutrophil) and CD14/integrin α4 (Itgα4) interactions were upregulated in the Cryba1 cKO retina (Figure 5a,b). CD14 is a pro-inflammatory molecule that is associated with the M1 phenotype in microglia and macrophages [37,38]. CD14 is also upregulated in microglia in neurodegenerative processes [37]. Moreover, it has previously been shown that CD14 is a ligand for integrins [39], and Itgβ1 as well as Itgα4 is known to be associated with increased neutrophil adhesion and transmigration into tissues [17,40]. Further, we have previously shown that Itgβ1 is essential for neutrophil homing into the retina [17]. Here, we asked whether RPE-derived pro-inflammatory factors can induce CD14 expression on microglia. To address this question, we cultured mouse microglia in RPESM harvested from WT, Cryba1 KO, and Akt2 KI RPE explant cultures for 24 h and, using flow cytometry, found significantly increased CD14 levels in the Cryba1 KO and Akt2 KI RPESM-exposed microglia; interestingly, these increases were rescued upon Akt2 inhibitor treatment (Supplementary Figure S3a-c). These results suggest that pro-inflammatory molecules released by the RPE could activate microglial CD14 expression through Akt2 signaling.
Further, to assess the role of microglial CD14 in regulating neutrophil adhesion molecules, mouse bone marrow-derived neutrophils were co-cultured with the RPESM-exposed microglia from all genotypes, with or without CD14 neutralizing antibody (nAb), for 4 h (Figure 5d). Using flow cytometry, we found significantly elevated levels of integrins α4 (PE-A fluorescence) and β1 (FITC-A fluorescence) in neutrophils co-cultured with Cryba1 KO and Akt2 KI RPESM-exposed microglia (Figures 5e-h and S4a-c). This effect was prevented by CD14 nAb treatment (Figure 5e-h). Moreover, no significant difference in integrin α4 and β1 expression was observed between untreated (control) neutrophils and neutrophils co-cultured with WT RPESM-exposed microglia (Supplementary Figure S4d-g). These results clearly indicate that the RPE-mediated activation of pro-inflammatory mediators on the microglial cell surface regulates the expression of neutrophil integrins, which are key to their adhesion and infiltration into tissue.
Figure 4 (caption, partial): … showing that Akt2 is upregulated in microglial cells following exposure to RPESM from Cryba1 KO and Akt2 KI RPE explant cultures, relative to WT RPESM-exposed microglia. (d) Cytometry bead array analysis revealed significant upregulation of the M1 mediators TNFα, IL-12, and IL-6 and downregulation of the M2 mediator IL-10 in MSM from Cryba1 KO and Akt2 KI RPESM-exposed microglia relative to WT RPESM-exposed cells, indicating a transition to the pro-inflammatory M1 state in these microglia. (e) Chemokines such as CXCL10, CCL4, and CXCL13 were also increased in the spent medium from Cryba1 KO and Akt2 KI RPESM-exposed microglia, compared to controls. Adding Akt2 inhibitor to the microglia culture medium rescued the levels of these pro-inflammatory mediators to near control values (d,e). n = 3. * p < 0.05, ** p < 0.01.

Figure 5 (caption, partial): … RPESM harvested from RPE explant cultures (24 h culture) from aged WT, Cryba1 KO, and Akt2 KI mice was added to culture media for mouse microglia for 24 h. Then, neutrophils were co-cultured with the RPESM-exposed microglia with or without CD14 nAb (1 µg/mL) for 4 h. (e,f) Flow cytometric fluorescence plots and (g,h) graphs showing increased expression of integrin α4 (e,g) and integrin β1 (f,h) in neutrophils co-cultured with microglia exposed to RPESM from Cryba1 KO and Akt2 KI RPE explant cultures, compared to neutrophils co-cultured with WT RPESM-exposed microglia. Treatment with CD14 nAb reduced the expression of both integrins on neutrophils (e-h), even after co-culture with activated microglia (Cryba1 KO and Akt2 KI RPESM-exposed cells). n = 4. * p < 0.05, ** p < 0.01.
Activated Microglia Drive Neutrophil Activation and Subsequent Retinal Degeneration
In addition to adhesion and transmigration due to elevated expression of cell surface integrins, activated neutrophils express high levels of lipocalin-2 (LCN-2) and myeloperoxidase (MPO) and form neutrophil extracellular traps (NETs), which have been shown to be critical in AMD pathogenesis [17]. To pinpoint whether activated (M1) microglia could induce neutrophil activation, we co-cultured mouse bone marrow-derived neutrophils with microglia pre-exposed (for 24 h) to RPESM from aged WT, Cryba1 KO, and Akt2 KI RPE explants. After 4 h of culture (Figure 6a), we evaluated the levels of LCN-2 and MPO. Our results showed that both LCN-2 and MPO levels were significantly upregulated in neutrophils co-cultured with microglia exposed to Cryba1 KO and Akt2 KI RPESM, compared to WT RPESM-treated microglia (Figure 6b). Interestingly, upon Akt2 inhibitor treatment, LCN-2 and MPO levels showed a noticeable decline, even after co-culture with microglia exposed to either Cryba1 KO or Akt2 KI RPESM (Figure 6b). We next assessed the degree of NET formation. NET formation is a phenotypic change typically observed in activated neutrophils that is characterized by extended nuclear processes (stained with DAPI) [41] and has been documented to be critical for degenerative processes both in AMD and in Alzheimer's disease [17,42]. Our results showed that, compared to WT RPESM-treated microglia, neutrophils co-cultured with microglia exposed to Cryba1 KO and Akt2 KI RPESM showed extensive NET formation (Figure 6c), which could also be rescued upon Akt2 inhibitor treatment (Figure 6c), further establishing that M1 microglia can activate neutrophils. Given that we have previously shown that NOD-SCID (immune-compromised) mice exhibit extensive retinal degenerative changes seven days after sub-retinal injection of activated neutrophils [17], we asked whether activated neutrophils induce early RPE morphological changes in NOD-SCID mice through Akt2 signaling. We injected (sub-retinally) neutrophils [17] that had been co-cultured with WT, Cryba1 KO, or Akt2 KI RPESM-exposed microglia, with or without Akt2 inhibitor treatment, into the sub-retinal space of NOD-SCID mice (Figure 6d). After three days, we evaluated the extent of RPE morphological changes by phalloidin staining (Figure 6d). Our results showed that neutrophils co-cultured with Cryba1 KO and Akt2 KI RPESM-exposed microglia triggered obvious morphological abnormalities in the central region of the RPE flatmount, as evident from disorganization of the honeycomb-like morphology and enlargement of cell size, compared to RPE flatmounts from mice injected with neutrophils co-cultured with WT RPESM-exposed microglia (Figure 6e). Interestingly, when the microglia were treated with Akt2 inhibitor prior to co-culture, the neutrophil injection did not produce extensive alterations in RPE morphology (Figure 6e), likely because Akt2 inhibition also reduced microglial activation (Figure 4d,e) and subsequent neutrophil activation (Figure 6b,c). These results suggest that targeting Akt2 to reduce microglia-mediated inflammation and subsequent neutrophil activation can mitigate RPE degeneration during AMD progression.
Discussion
In AMD, inflammation is now thought to be a key factor in disease progression [2-4]. Intensified para-inflammation results in chronic inflammation that drives the disease process and ultimately retinal degeneration [8-11]. In animal models and human AMD patients, microglia and monocytes were found to be the dominant cells associated with disease progression [2,3,11-15]. Seminal published research has demonstrated that both microglia and monocytes infiltrate the SRS in the disease state [11-15]. It is known that the retina has two distinct microglia pools differing by niche and IL-34 dependency [12]. In the normal retina, IL-34-dependent microglia contribute to neuronal function, whereas, during degeneration, these populations of microglia move toward the RPE cells into the SRS, an inducible disease-associated niche [12]. This microglial transition results from extensive transcriptional reprogramming of microglia, characterized by reduced expression of homeostatic checkpoint genes and upregulation of injury-responsive genes [12]. Genetic polymorphisms associated with AMD have been linked with the activation of microglia and monocytes, their infiltration into the SRS, and the subsequent induction of pathogenic chronic inflammation [26,27]. As an example, genetic variants of Cx3cr1 (a microglial homeostatic marker gene that is not expressed by any other cells in the retina), the apolipoprotein E2 isoform (APOE), and complement factor H (Y402H) are associated with increased accumulation of microglia and monocytes in the SRS in human AMD patients [11-15]. In animal models, this accumulation of immune cells has been shown to trigger photoreceptor degeneration [11-17]. Moreover, in mice, loss of Cx3cr1 induces a pro-inflammatory activation of microglia and triggers sub-retinal accumulation of monocytes [11,12,15]. Importantly, the age-dependent accumulation of subretinal monocytes in Cx3cr1-deficient mice is associated with a significant degeneration of rods and cones [11,43].
Microglia and monocytes are now established as major immunomodulators in AMD [11], but it is hard to believe that these immune cells are the only ones responsible for disease progression in a multi-factorial disease such as AMD. Currently, the specific roles of other immune cells, particularly neutrophils, remain uncertain in AMD pathogenesis. It has previously been shown that the number of neutrophils in the peripheral blood of neovascular (wet) AMD patients is highly elevated, and that the resolution-of-inflammation marker CXCR2 on these neutrophils is downregulated [44]. In addition, we have shown that neutrophils infiltrate the retina in human dry/atrophic AMD patients [16,17] and in a mouse model exhibiting a slowly progressive atrophic (early/dry) AMD-like pathology [17]. This neutrophil infiltration correlates with the development of an aging-related chronic inflammatory response [16,17]. Our studies revealed that activation of Akt2 signaling in the RPE triggers neutrophil infiltration in human AMD patients and in our mouse model, and that this infiltration is associated with retinal degeneration [17], a finding confirmed by the fact that inhibition of Akt2 in the mouse model reduced neutrophil infiltration and alleviated early RPE changes [17].
The infiltration of microglia and neutrophils into the SRS during AMD progression is well documented [11,12,17], but a major gap in our understanding is how these immune cells interact with each other in the diseased state. It is known that under normal physiological conditions, the homing of neutrophils into the retina is restricted, as they are effectively eliminated by microglia, thereby maintaining tissue homeostasis [10,11]. However, the nature of their interactions in the retina during AMD progression is unknown. In other diseases such as Alzheimer's disease, stroke, and intracerebral hemorrhage, it has previously been documented that activated microglia facilitate neutrophil survival, activation, and migration, leading to tissue damage [45-47]. To assess the role of the microglia/neutrophil interaction in AMD progression, we performed scRNAseq analysis on cells from the SRS isolated from Cryba1 cKO mice, employing a novel bioinformatic tool, LRLoop [25]. This identified several interacting partners linking the two cell types, suggesting that microglia-neutrophil interaction is an important factor in the AMD-like pathology in this mouse model. It is likely that the chronic inflammatory transition in AMD pathogenesis results from alterations in these microglia-neutrophil interactions, where protective microglia would normally eliminate any infiltrating neutrophils during the normal aging process [36]. However, as a consequence of genetic predisposition or lifestyle/environmental factors, microglia can transition into an M1 (pro-inflammatory) state which in turn could activate other immune cells and lead to retinal degeneration [8-11].
In this study, we provide novel evidence regarding the role of microglia/neutrophil interaction in AMD pathogenesis. We observed that RPE cells drive the transition of microglia into a pro-inflammatory (M1) phenotype leading to their migration into the SRS in our mouse models. It has been previously shown that RPE-derived inflammatory factors trigger immune cell activation [11-17]. We show that the process of microglia activation is dependent on Akt2 signaling and that inhibiting Akt2 in microglia reduced the levels of secreted pro-inflammatory mediators (Figure 7). Akt2 has been shown to be indispensable for M1 transition in macrophage-lineage cells [29,33,48]. Macrophages from Akt2 knockout mice remain in an anti-inflammatory (M2) phenotype even after exposure to LPS, a known activator of the M1 phenotype [48]. Targeting Akt2 in activated microglia using gene silencing approaches has also been previously shown to curb the M1 phenotype and confer protection in animal models of demyelination [33]. We also show that M1 microglia trigger neutrophil activation (elevated LCN-2, MPO and NET formation) that subsequently induces early RPE morphological changes in NOD-SCID mice. Further, our study shows that CD14 on the M1 microglial cell surface can regulate the expression of neutrophil adhesion proteins such as integrin β1 and α4 (Figure 7), which have been previously shown to be important for neutrophil migration and activation [17,40]. CD14 is an important ligand for integrins and shows an activation state-dependent binding, thereby maintaining integrin expression on the cell surface [39]. Elevated levels of CD14 have been associated with other neurodegenerative diseases such as Alzheimer's disease [37]. Interestingly, inhibiting CD14 and microglial activation in vitro via Akt2 inhibition resulted in rescue of neutrophil activation and reduction of cell surface adhesion protein levels (Figure 7). These results strongly suggest that microglia-neutrophil interaction is important in the pathogenesis of AMD. Future studies should aim to elucidate the nature of these intercellular processes as a means of identifying new targets and approaches for treating or preventing dry AMD.
Student transitions to blended learning: an institutional case study
This paper examines the experiences of students transitioning to blended learning in the University of Glasgow as part of the QAA Enhancement Themes work on Student Transitions. We draw here on exploratory, qualitative research to examine the benefits, challenges and skills developed by students during transitions to blended learning as a means of advancing understanding, and informing future curriculum design. Data from home undergraduate and international postgraduate students were collected over two years through focus groups, individual interviews and end-of-course quality assurance surveys. We found that while home/undergraduate and international/postgraduate students have similar transition experiences, international taught postgraduates encounter additional challenges in terms of acclimatising to UK higher education (HE), especially within shorter programmes of study and where pedagogical and language differences exist. The findings are integrated in a conceptual framework highlighting the importance of access, acculturation (attitudes) and attributes (skills) to enable learner autonomy to engage effectively in blended learning. The findings have implications for institutional infrastructure, curriculum design and learner development. Further research is required to collect a larger data set as a means of developing the study’s conceptual framework, in order to better understand and support diverse student transitions to blended learning.
Introduction
There is an increased awareness of the importance of blended learning in the higher education sector (Garrison & Vaughan, 2008), especially as the changing digital landscape continues to shape learning and teaching. Blended learning is commonly defined as a combination of face-to-face learning with technology-based or online learning (Driscoll, 2002; Garrison & Kanuka, 2004; Graham, Woodfield, & Harrison, 2013). We subscribe to Garrison and Kanuka's (2004) definition of blended learning, where the two components are optimally integrated to make the most effective use of both modes of study. There is a general expectation among today's students that technology-enhanced learning will be an integral part of the university experience and that they will need to adapt to this new approach to learning (Beetham, White, & Wild, 2013; Dzakiria, Mustafa, & Bakar, 2006). This adaptation is not only relevant in gaining knowledge and learning skills, but also has implications for future career prospects (Concannon, Flynn, & Campbell, 2005) in terms of social and digital literacies. For teachers, developing an effective approach to blended learning requires relevant skills and experience in designing and facilitating blended learning opportunities (Dzakiria et al., 2006), informed by a good understanding of students' experiences (Carswell, Thomas, Petre, Price, & Richards, 2000). This is perhaps one reason Dzakiria et al. (2006) called for teachers to have an in-depth understanding of learners' needs, and additional skills and experience, when facilitating, designing and building the learning systems and support mechanisms that encourage blended learning.

A review of the literature over the past 15 years reveals several perspectives on students' experiences of blended learning (Concannon et al., 2005; Cramphorn, 2004; Garner & Rouse, 2016; Stacey & Gerbic, 2006). For example, Cramphorn (2004) found that psychological factors, lack of social cues and time management may all act as opportunities or barriers to student participation in blended learning. Concannon et al. (2005) noted that students' attitudes to computers, study patterns and future career plans may either enhance or inhibit student use of information and communication technology (ICT). Blended learning has also been linked to a higher level of student engagement (Stacey & Gerbic, 2006) and self-regulated learning (Smyth, Houghton, Cooney, & Casey, 2012), and was seen to provide flexible access to learning resources with the added advantage of allowing students time to review composed thoughts, making reflection a part of the learning process (Salmon, 2004). Communication-related benefits have also been recorded, especially for international students with English as a second language (Thompson & Ku, 2005). There is the argument that blended learning presents fewer language barriers and resolves the problem of speed and accent in conversations (Ku & Lohr, 2003; Stacey & Gerbic, 2006). Other studies point to the benefit of developing social and digital literacies that will be useful in virtual team working (Clark & Gibb, 2006; Dineen, 2005; Olson-Buchanan, Rechner, Sanchez, & Schmidtke, 2007).

Despite the benefits and opportunities brought about by blended learning, growing interest in this form of learning and teaching has also served to make its limitations more apparent. For example, Owston, York, and Murtha (2013) raised concerns as to whether blended learning is suitable for all students: the study found that higher achieving students were most satisfied with blended learning and preferred this method to a face-to-face class in terms of convenience, engagement, and effectiveness of learning. Low achievers were observed to need the additional structure found in the traditional face-to-face class, as they lacked the independent study skills required for blended learning. Lam (2015) argued that learning autonomy is key for students to succeed in blended learning. Another area of caution relates to students worrying about spelling and grammatical errors, which could be a distraction to learning and engagement (Thompson & Ku, 2005). Concerns have also been raised around technology invading personal time and space (Smyth et al., 2012), which may bring about stress and information overload, as well as difficulty adjusting to the structure of online courses and a lack of time management and self-motivation (Marino, 2000, cited in Tsai, Shen, & Tsai, 2011). Other challenges include intermittent technical problems (Concannon et al., 2005), limited interactions and differences in learning orientation (Dzakiria et al., 2006), and social barriers such as a lack of confidence to publicly participate in online forums (Cramphorn, 2004).

While such studies have explored student experiences of blended learning generally, relatively little attention has been given to the similarities and differences that exist between different cohorts of students. One study, by Ku and Lohr (2003), found similarities between Chinese and American students' attitudes toward online learning. While the American students reported feelings of isolation and frustration over a lack of oral or verbal communication, they acknowledged the benefits of learning flexibility in terms of pace, space (no travelling required) and the ability to reflect upon written thought. Chinese students shared similar experiences but encountered additional challenges due to differences in cultural values, language and approaches to learning. However, Chinese students expressed more confidence in their communication, feeling empowered due to the opportunity to reflect on their compositions prior to posting them online. This is important from the perspective of the substantially growing numbers of international students studying in UK HE (UKCISA, no date), and the fact that students are not homogenous but differ in expectations and experiences of UK HE learning and teaching. Consequently, there is a need to better understand how students vary in their expectations and experiences of blended learning, given the increased emphasis on technology-enhanced learning in HEIs (Gordon, 2014) and the increasing number of international students in UK HE.
Research at the University of Glasgow
The University of Glasgow (UofG) launched three rounds of investment in blended and online learning development (known as the BOLD project). Drawing on support from QAA Scotland, we explored student and staff experiences of this online provision. This particular study comprised a number of pragmatic questions that relate to the student experience, influenced by our own and others' work on student transitions. Focusing specifically on blended (rather than fully online) programmes, we explored student expectations, perceived benefits and challenges with online provision, and skills required/developed during the transition to blended learning. Staff perspectives on transitions to blended learning are explored in a separate paper (Adekola, Dale, & Gardiner, in review).
Methodology
Although we have a number of students engaged in blended learning courses, this was a small, exploratory study (akin to a pilot study). In order to include undergraduate (UG) and postgraduate taught (PGT), as well as home (H) and international (I) students, we drew on a purposive sample of students (Cohen, Manion, & Morrison, 2000), by approaching students on three courses across three of our academic Colleges that were newly introducing blended approaches. Sampling was also opportunistic (Cohen et al., 2000) as we relied on volunteers. We found that PGT students were more responsive to invitations than UG students, and thus relied on secondary data from quality assurance surveys to capture UG student input, although we did have an opportunity to add specific open questions to the end-of-course surveys to align with the study. These included students' motivations to engage in a blended course/programme and their expectations, perceived benefits and challenges/barriers, support needs, and skills developed. Ethical approval for the study was sought from and granted by the UofG's College of Social Sciences' Research Ethics Committee.

Our approach presents several methodological challenges in terms of response rates, comparability of cohorts, and the nature of the data and its fit with our research questions. However, as a small-scale, initial study, we believe the approach offers insights as long as the methodological aspects are understood. We make no attempt at generalisability within the programmes and the UofG generally, nor to wider audiences, and offer this work as a case study that has the potential to inform other institutions about the diverse experiences of students transitioning to blended learning. The study does not make any epistemological/ontological assumptions other than being post-positivist or interpretivist in design, valuing the perceptions and experiences of individual participants.

Courses were delivered through a dedicated Moodle course that included online discussion forums, videos, quizzes, homework and collaborative activities. School B additionally made use of Aropa for online peer assessment. As all UofG students use Moodle to support their learning on traditional face-to-face courses, the notable difference with the blended courses is that the predominant medium of delivery and much of the interaction differs, albeit that the technology itself will be familiar.
Data collection
In session 2014/15, students were recruited from a blended postgraduate course in School A (cohort size = 269). In session 2015/16, students were recruited from four online undergraduate classics courses in School B (total cohort size = 31), and additional home-based and international participants were recruited from a blended postgraduate course in School C (cohort size = 67). Participants from Schools A, B and C (which are in different Colleges) are shown in Table 1.
Data analysis
Data were collected through focus groups and interviews, undertaken and audio-recorded by author JA, and professionally transcribed, with JA verifying the accuracy of the transcripts. Data from the quality assurance end-of-course online surveys were extracted verbatim. The data were independently analysed by two researchers (JA and VHD), using an inductive approach to thematic analysis as defined by Braun & Clarke (2006). It should be noted that only PGT students contributed to the interviews/focus groups, so the quotes primarily reflect their perspectives.
Thematic analysis from an inductive perspective involved first reading the transcripts and survey data through fully, then identifying codes to represent meaningful responses, grouping the codes into categories, and then reflecting on overall themes. Two researchers (JA and VHD) negotiated the final coding outcome by reviewing transcripts against the two proposed coding systems, and coming to an agreement regarding the wording we had each used to code particular responses. The overall themes reflect the original questions around students' expectations, benefits, challenges and skills developed. For brevity, we excluded participants' responses around their motivations to engage in blended learning, since for some cohorts there was no choice, and also support needs, as these are considered in the context of the conceptual model.
Expectations of blended learning
It was clear from all cohorts that students typically expected that blended learning would be easy to use, accessible, and that most study involves some degree of online engagement; as one explained: Yeah, I think it's a new way. And I think this way is penetrating the education system, so we can't say no to the distance learning or the video. We can't say that, we have to accept it. (I/PGT).

They also expected to have the same "ease of contact with teachers" (H/UG) as they would in a face-to-face setting. Categories emerging in relation to expectations were convenience and flexibility in learning, equivalence of experience, and normalisation of online learning.
Benefits
Participants were positive about the opportunities and actual convenience brought about by a blended approach in terms of affording them flexibility and control over their learning. Other reported benefits were that blended learning encouraged independent learning, the blended course was easy to use, it was easier to ask questions online, there was the potential for peer learning using forums, and there was an emphasis on active learning. Particularly notable was the sense that students experienced greater equity of participation: Everyone contributes to seminar work rather than a few people doing all of the talking in a face-to-face seminar. (H/UG). This is especially important as our international students previously reported difficulty contributing in face-to-face classes due to cultural and confidence issues (Fischbacher-Smith et al., 2015). It was encouraging, therefore, to hear students speak of feeling more confident and more independent in their learning: We are shy and the teacher can be more active to us… he always chats with us. And maybe we will be more familiar. (I/PGT).

It makes you become more independent… This course can encourage us more to do the research yourself and if you don't understand it, you can ask about it with your tutor or with your classmate… (I/PGT).

Importantly though, as the last quote indicates, independence does not necessarily mean isolation, as students found themselves more confident engaging online. There was also some indication of improved digital literacies and learning literacies generally that we would wish to explore further: I end up using internet, YouTube quite a lot when I'm studying on my own, just to get a better understanding of something, because books are wonderful, lectures are amazing, but sometimes, you could just come up with your own questions, the more you study, the more you think… (I/PGT).
Challenges
Commonly reported challenges for different student groups included time management, reduced face-to-face contact leading to a sense of loneliness, and technical issues relating to access to material and the quality of learning materials. A lack of input from others, or difficulty asking questions online, was also a challenge. It was particularly difficult for some students to feel that they had fully resolved their enquiry or their concern in an online environment, largely due to the inability to engage directly (or synchronously) with the lecturer or their peers: Sometimes when you ask questions on the forum, like they haven't exactly answered what we want to… like to the point. Like they have gone round the bush but not exactly answered. (I/PGT).

Reliable access to learning materials was something that emerged across the cohorts. Understandably, where a course is online, any technical shortcomings such as intermittent Wi-Fi provision, resource links that have expired or poor quality of video become a challenge. International postgraduate students experienced additional difficulties associated with their transition to UK HE: adjusting to a new mode of learning that they had not anticipated, and questioning the rationale of online learning after having travelled to the UK for a face-to-face experience: When [students] come to the UK, we try to feel the culture here, to try for the face-to-face communication, not just watch the online, or why come here? (I/PGT).

I/PGT students also expressed concern that they were ill equipped in terms of language to engage in a blended programme, even where prior experience existed, because they had to first translate the English term associated with a technology into their own language to discern what it is used for. This was particularly the case when using certain software packages: You need to know English, because I swear, I think it would be really difficult to use some of the programs for international students when they were kind of like, I don't even know what that word is, you don't really know how to navigate things. (I/PGT).

Many of these points relate generally to the transition into study in UK HE. It would seem here, though, that while on the one hand there are benefits from online study in relation to independence, on the other hand there might be a corresponding lack of confidence and reduced sense of connection with study. Some students spoke of feeling "alone" and "lonely" because of the lack of face-to-face contact with teaching staff and other students: Sometimes I feel alone because I want to discuss with my friend when we are having classes. But when I'm just watching those computer screens I feel lonely and helpless! (I/PGT).

Although an online environment might provide a safe place for students to prepare and ask questions, it is nonetheless not an anonymous space, and so issues of saving face were still raised, with some students expressing reluctance to contribute in an online setting because of potential embarrassment.
Skills developed
According to the participants in this study, the most important skills they developed through engaging in a blended learning programme included readiness for online learning, digital literacies, time management, written communication and critical thinking skills. The latter is particularly significant given the research-led ethos of the UofG and the skills that we seek to embed from the very beginning of our programmes. International students readily reported greater independent learning, more insight into their own learning, and enhanced facility to do their own research. This suggests that the online setting provides them with the time, space and opportunity for reflection that they expressed they lack in other classroom settings; we might also reflect on what that means for the design of face-to-face teaching and learning. There is also a sense that students develop a better appreciation for the value that their peers bring to any learning situation (social literacies), as this international student explained: I had an interesting experience, once I asked some questions and some other student told me the answers… and I started to learn. I feel stupid but I still continue to ask, I don't care. (I/PGT).

Where students do contribute more readily online, not only is the opportunity for peer learning greater than is perhaps achievable in a large class, but so too is their exposure to the range of possible approaches to any given learning 'problem': Maybe your answer isn't wrong because critical evaluation, because we're not doing the mathematics, we're doing [social science], you have so many answers, so it doesn't matter. (I/PGT).
Discussion
This study sought to explore student transitions into blended learning. Within this, expectations, benefits, challenges, and the skills required to optimally engage in blended learning were identified. It is important to note that the design of our study and our interpretation of findings were informed by our review of literature over the last 15 years. As further research takes place within the sector, our recommendations are subject to change. Nevertheless, we believe the study offers useful guidance for teachers and institutions currently looking to better support student transitions into blended learning.

Students expected to use technology-enhanced learning approaches as part of their studies; this finding is consistent with Concannon et al. (2005), who found that e-learning was an expected part of the university experience. In relation to perceived benefits, students identified the opportunities and convenience brought about by the blended approach, affording them flexibility and control over their learning in terms of space, pace and choice of learning strategies, again consistent with Concannon et al. (2005). Students also highlighted increased student participation, ease of communication and peer learning. For international students who have expressed communication difficulties due to lecturers' and students' pace of talking and accent (Fischbacher-Smith et al., 2015), the online environment offers a landscape potentially easier to navigate.

Challenges identified by some students included time management and a reduction in face-to-face contact, which created a sense of isolation and a lack of immediate feedback. There were also concerns around technical issues. Some international students also felt that teachers assumed some prior knowledge, which raises the need to balance opportunities for independent learning against the need to meet student expectations for direct contact with their lecturers/tutors and peers. These findings align with those of other studies (Concannon et al., 2005; Smyth et al., 2012; Thompson & Ku, 2005; Marino, 2000, cited in Tsai et al., 2011). While most students experienced challenging transitions to blended learning, these appear to be compounded for international students. A recent scoping study into the transitions of international students to Scottish HEIs by Bell (2016) confirms that international students experience multiple transitions. In addition to the sociocultural and language transitions, Bell (2016) observed difficult transitions in terms of students' expectations of the academic environment and their assessment literacies, particularly for postgraduates from East and South East Asia. Unfortunately, the nature of our study design limits our ability to draw conclusions about the different student cohorts, but previous research undertaken in the University, and the work referred to here by Bell (2016) and others, does lend further weight to our observations that there are particular challenges for international students that might be heightened in an online setting.

The study also reveals that students require several essential skills to optimally engage in blended learning; these include time management, readiness for online learning, and digital and social literacies. Notably in this study, there was some evidence that the online environment created greater awareness and development of some of these skills. This initial insight poses several associated questions that we intend to pursue in the future, given the strategic importance of critical thinking, independent learning and the development of a range of skills that enhance student learning and employability.

One final point to note, although one we do not develop in this paper, is that some students believed that blended learning saves (teachers and the institution) money, and there was a potential tension for international students who were paying for a campus-based education but learning in an online environment. This shows that international students are conscious of the value they attribute to face-to-face teaching, relating it directly to the cost of tuition fees, and considering it potentially superior to online learning. These findings are consistent with work elsewhere (Smyth et al., 2012; Stacey & Gerbic, 2006). This may relate more to how we manage expectations and demonstrate the benefits of blended learning within a largely campus-based programme than to the intrinsic value of blended learning and its relevance or place within a campus-based programme, but it is clear that the educational value of the approach needs to be better articulated to students.
A conceptual model of student transitions to blended learning
Reflecting on our findings, and revisiting the original transcripts, it became clear to us that success in blended learning was ultimately a function of four distinct aspects: access, acculturation (attitudes), attributes (skills) and autonomy. Access to the technology and online learning resources is a fundamental requirement that underpins blended learning. Only if students have access can they open their minds to the possibility of blended learning, their attitudes having been informed by previous educational experiences and preferences. Necessary attributes (skills) include digital and social literacies as outlined. Acculturation and attributes may inform each other, since acquiring new skills may influence students' attitudes, or their attitudes may underpin their potential to develop essential skills. Autonomy, the pinnacle of the model, relates to the need for students to have the confidence and competence to engage as independent, reflective learners (Owston et al., 2013), required for effective engagement in blended learning. From the study reported here, we suggest that access, attributes (skills) and acculturation (attitudes) directly influence students' autonomy as learners and are essential in an online environment. Conversely, a lack of access, attributes or acculturation will undermine autonomy, potentially leaving residual dissatisfaction with the learning experience and the University.
This conceptual model has some resonance with the pyramid model of digital literacy developed by Beetham and White (2010), cited in Jisc (2014). It too puts access as a foundation that underpins skills, practices (behaviours) and identity (similar to blended learner autonomy), although it focuses exclusively on digital literacies rather than a holistic consideration of blended learning.
Implications and conclusion
Readers (teachers, learning designers, senior administrators and heads of services) may wish to consider how they may apply the model to their own academic context. The issues raised around access emphasise the essential nature of a robust IT infrastructure, including support for BYOD (Bring Your Own Device). These findings are echoed in a separate paper that examines institutional and teacher transitions to blended learning (Adekola et al., submitted). In a literature review of student expectations of the digital learning environment, Beetham et al. (2013) also revealed that students expect Wi-Fi, power sockets, access to content through the institutional VLE (virtual learning environment), and support to bring any number of devices onto campus. These assumptions will increasingly be taken for granted within higher education, not least given the growing use of technology-enhanced active learning spaces across the sector internationally (New Media Consortium, 2017).
In terms of curriculum design, teachers and support staff also need to be cognisant of the expectations and prior experiences of students transitioning into blended learning, to address the acculturation and attributes dimensions of the framework. While we believe that all learners should have access to induction and ongoing support for blended and online learning, this is particularly relevant for international students, who have travelled to the UK for a face-to-face learning experience, having experienced a culturally different pedagogy. Consideration should also be given to when, and for how long, students should be exposed to an online course within a blended programme. In this study one semester was too short a time for international students to acclimatise to blended learning; spreading an online or blended course over a longer period of time might be one alternative, but irrespective of duration, it is essential to provide appropriate study skills support (Fischbacher-Smith et al., 2015). Bell (2016) provides a set of recommendations for supporting international students, addressing issues of socio-cultural integration, English language ability, and academic expectations and integration respectively. This includes engaging with students pre-arrival, providing appropriate academic advice on arrival, and providing staff with appropriate support to ease international student transitions; in this context, this requires academic development around blended learning and teaching as well. One aspect of this transition identified in the findings is that while some students clearly benefit from and flourish within the online environment, others do not. It is beyond the scope of the paper (and the data) to explore how to support students who do feel isolated and who disengage from online interactions, but we note it here as an area for further research and careful attention by institutions.

Our study has led to the proposal of a conceptual framework with four key elements of transition into blended learning, namely: access, attributes (skills), acculturation and autonomy. Each aspect has implications for institutional infrastructures, curriculum design, and student support and development. For example, reliable access to Wi-Fi is a basic requirement that institutions must consider, as well as student access to devices and learning materials. In relation to students' abilities or skills, teachers need to address the development of digital literacies and other essential attributes within the curriculum design. In terms of acculturation, institutions need to consider learners' previous experiences of online learning and higher education, and how these transitions may be best supported, particularly for international postgraduate learners making these transitions within a shorter timescale, who may otherwise experience isolation and dissatisfaction. These areas merit further research and a stronger evidence base that we intend to pursue in the future as we refine our conceptual framework.
Figure 1: 4As in transitions to blended learning
Table 1: Participants recruited to the study.
*Standard survey modified to include questions pertinent to this study.
"Education",
"Computer Science"
] |
Manufacture of Micromirror Arrays Using a CMOS-MEMS Technique
In this study we used the commercial 0.35 μm CMOS (complementary metal oxide semiconductor) process and simple maskless post-processing to fabricate an array of micromirrors exhibiting high natural frequency. The micromirrors were manufactured from aluminum; the sacrificial layer was silicon dioxide. Because we fabricated the micromirror arrays using the standard CMOS process, they have the potential to be integrated with circuitry on a chip. For post-processing we used an etchant to remove the sacrificial layer and thereby suspend the micromirrors. The micromirror array contained a circular membrane supported by four fixed beams set symmetrically around the circular mirror; four fan-shaped electrodes below the mirror controlled its tilting. A MEMS (microelectromechanical system) motion analysis system and a confocal 3D-surface topography system were used to characterize the properties and configuration of the micromirror array. Each micromirror could be rotated in four independent directions. Experimentally, we found that the micromirror had a tilting angle of about 2.55° at a driving voltage of 40 V. The natural frequency of the micromirrors was 59.1 kHz.
Introduction
The popularity of portable computers, mobile communication devices, and personal electronics is growing rapidly, with many consumers desiring bright, high-resolution, large-viewing-area, and compact displays. The main challenges when fabricating portable displays and microvision systems are the size, power consumption, and the choice of manufacturing process. Microelectromechanical systems (MEMS) technology, mostly developed during the past two decades, can be used to overcome these challenges.
Micromirrors are among the important optical devices used in scanned display and image systems. They are also applied widely in projection display systems [1], optical scanners [2], optical waveguides [3], and optical switches [4,5], and for signal processing in rear-projection televisions [6]. Micromirrors allow scanners in point-to-point scanning to produce distortion-corrected images on highly curved surfaces. For example, Chiou and Lin [7] developed a torsion micromirror device possessing multiple driving electrodes to investigate the effect on the tilting angle of the arrangement of driving electrodes. Jang and Kim [8] presented a digitally operated micromirror array possessing a torsional spring; they studied the deviations in pull-in voltages obtained using different spring sizes, finding they were clearly affected by the mirror gap and the spring width. Several studies have revealed the feasibility of manufacturing micromirrors using surface or bulk micromachining processes. For instance, Al-Aribe and Knopf [9] proposed a MEMS torsion micromirror for use in optical switches. In general, however, MEMS-based micromirrors may possess some problematic features, such as high process complexity, high driving voltages, high power consumption, and difficulty in combining them with circuits on chips. Standard complementary metal oxide semiconductor (CMOS) processes allow the fabrication of small devices; the CMOS-MEMS technique is the name of the commercial CMOS process used to fabricate MEMS devices [10][11][12][13]. Micromirrors manufactured using the CMOS-MEMS technique have many advantageous properties, including stable processing, ready integration with circuits, and low cost.
Many actuation methods can be used on MEMS actuators, including thermoelectric, pressured, magnetic, and electrostatic force methods. Among these approaches, the electrostatic force method is the most popular for achieving rapid response times, low power consumption, and simple electronics. Many researchers have developed electrostatically actuated micromirrors, such as Texas Instruments' digital micromirror device (DMD) [14], AT&T's optical switches [15], and Cheng and co-workers' mirror integrated with circuitry [16]. Herein, we discuss a rotatable micromirror array that is driven using electrostatic forces.
In this paper, we present a surface-micromachined and rotatable micromirror array fabricated using the commercial 0.35 µm CMOS process. The post-CMOS process in this work, which requires only one maskless wet etching step, is simpler than that of Cheng et al. [16]; thereby, the post-process is easy to execute and low in cost. Many electrostatically actuated micromirrors not only rotate but also displace downward when they are actuated. We introduce a center pin to prevent such downward displacement. The natural frequency of this micromirror array is suitable for commercial projection displays in which the dot pitch is sufficiently small to provide high resolution.

Design and Simulation

Figure 1 displays a schematic representation of a micromirror in our array. The mirror is made of aluminum. Its structure is circular, suspended and supported by four fixed beams and one pin. The pin, located at the center of the micromirror, is a holder providing tilt stability; its diameter is 0.5 µm. The diameter of the mirror is 100 µm; each supporting beam is 60 µm long, 2 µm wide and 0.5 µm thick. Four quarter-circle fixed electrodes, which are the driving units used to tilt the mirror, are located below the suspended mirror; each fixed electrode controls one tilting direction. The mirror is actuated by electrostatic force, the strength of which is given by:

F = εAV² / (2d²)   (1)

where ε is the permittivity, A is the overlapping area, d is the distance between the parallel electrodes, and V is the driving voltage applied between the fixed electrode and the mirror. When a dc bias is applied to the mirror and a fixed electrode, the mirror tilts toward that electrode, allowing the mirror to be rotated in the x and y directions. Because the gap between the mirror and the driving electrode is 2.6 µm, the permittivity of air is 8.85 × 10⁻¹² F/m, and the overlapping area is 1,590 µm², Equation (1) suggests that an electrostatic force of 1.2759 µN results under an applied driving voltage of 40 V. Figure 2(a) presents a model of the dynamic motion of a micromirror suspended and fixed by four cantilever beams. The micromirror lies flat and horizontal to the surface in the absence of a driving voltage. As shown in Figure 2(b), the micromirror is tilted upon applying a driving voltage. Among the four cantilever beams, two act as linear springs (k) and the other two act as torsion springs (k_t). If an ac voltage V sin ωt is applied to the micromirror, the micromirror undergoes harmonic motion. The lower electrodes are quarter-circle shaped, so their centers of gravity lie at the radial position r̄. From Newton's law, the equation of motion of the micromirror is given by:

I_m θ̈ + c θ̇ + (2k r² + 2k_t + k_p) θ = F r̄ sin ωt   (2)

and:

I_m = m r² / 4   (3)

k = 12EI / L³   (4)

k_t = GJ / L   (5)

k_p = 4EI_p / L_p   (6)

where θ represents the rotated angle of the mirror when the driving voltage is applied; θ̇ is the angular velocity; θ̈ is the angular acceleration; F is defined by Equation (1); I_m is the mass moment of inertia of the mirror; c is the air damping; r is the radius of the mirror; m is the mass of the mirror; E is the Young's modulus of the cantilever beams and the pin; I is the moment of inertia of the cantilever beams; G is the shear modulus of the torsion bars; J is the polar moment of inertia of the torsion bars; L is the length of the cantilever beams and the torsion bars; k_p is the stiffness of the pin; I_p is the moment of inertia of the pin; and L_p is the length of the pin. Equation (2) can be written as:

θ̈ + 2ζω_n θ̇ + ω_n² θ = (M_0 / I_m) sin ωt   (7)
where k_e represents the equivalent stiffness of the mirror; ζ is the damping ratio; ω_n is the natural frequency of the mirror; and M_0 = F r̄ is the moment. The equivalent stiffness is:

k_e = 2k r² + 2k_t + k_p   (8)

The particular solution of Equation (7) can be expressed as [17]:

θ_p(t) = Θ sin(ωt − φ)   (9)

tan φ = 2ζ r_ω / (1 − r_ω²)   (10)

Θ = (M_0 / k_e) / √[(1 − r_ω²)² + (2ζ r_ω)²]   (11)

where Θ and φ are the amplitude and phase angle of the response, respectively; r_ω is the frequency ratio, r_ω = ω/ω_n. The maximum amplitude occurs when:

r_ω = √(1 − 2ζ²)   (12)

Substituting Equation (12) into Equation (11) gives:

Θ_max = (M_0 / k_e) / (2ζ √(1 − ζ²))   (13)

Substituting Equation (1) (with M_0 = F r̄) into Equation (13), the maximum amplitude of the rotated angle of the micromirror is obtained:

Θ_max = εAV² r̄ / (4 d² k_e ζ √(1 − ζ²))   (14)

In the design, the equivalent stiffness of the mirror, k_e, is about 1.35 × 10⁻⁹ N-m/rad, and the overlapping area of the electrodes, A, is 1,590 μm². Figure 3 shows the maximum amplitude of the micromirror for different damping ratios, evaluated using Equation (14). In addition to the equivalent stiffness and the geometric shape of the mirror, the maximum rotated angle of the mirror depends on the driving voltage and the damping ratio.
The natural (resonant) frequency of the mirror is:

ω_n = √(k_e / I_m)   (15)

In accordance with Equation (15), the resonant frequency of the mirror changes as the effective stiffness and the mass moment of inertia of the mirror vary. The equivalent stiffness in Equation (8) contains the three factors k, k_t and k_p, so it is not easy to determine the values of k, k_t and k_p from the equivalent stiffness alone. To determine the values of k, k_t and k_p more easily, the dimensionless resonant frequency is introduced. Substituting Equations (3) and (8) into Equation (15), the dimensionless resonant frequency of the micromirror is obtained:

ω̄_n = ω_n / √(k/m) = 2 √(2 + α + 2β)   (16)

and:

α = k_p / (k r²)   (17)

β = k_t / (k r²)   (18)

where α and β are the stiffness ratios. Figure 4 shows the dimensionless resonant frequency of the micromirror, computed using Equation (16). The results depict that, at α = 3, the dimensionless resonant frequencies of the micromirror are 4.47 and 10 for stiffness ratios β of 0 and 10, respectively.
In our design, the value of E is 70 GPa; I is 2.08 × 10⁻²⁶ m⁴; G is 28 GPa; L is 60 μm; I_p is 3.07 × 10⁻²⁷ m⁴; L_p is 1.4 μm; and J is 3.54 × 10⁻²⁵ m⁴. Substituting these values into Equations (4), (5) and (6), the stiffnesses of the cantilever beams, the torsion bars and the pin are evaluated as k = 0.081 N/m, k_t = 1.646 × 10⁻¹⁰ N-m/rad and k_p = 6.136 × 10⁻¹⁰ N-m/rad, respectively. According to Equations (17) and (18) with r = 50 μm, the stiffness ratios α and β are 3 and 0.81, respectively. As shown in Figure 4, the dimensionless resonant frequency is 5.14 at α = 3 and β = 0.81.
The mass of the mirror is 1.158 × 10⁻¹¹ kg, so we calculate the resonant frequency of the micromirror to be 68.7 kHz. All symbols are listed in Table 1, and the material and geometric properties of the micromirrors are summarized in Table 2.
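The chain of values above can be checked numerically. The following is a minimal sketch (not the authors' code): the closed-form stiffness expressions k = 12EI/L³, k_t = GJ/L, k_p = 4EI_p/L_p, k_e = 2kr² + 2k_t + k_p and I_m = mr²/4 are inferred here because they reproduce the reported numbers; treat them as assumptions.

```python
import math

E, I = 70e9, 2.08e-26      # Young's modulus (Pa), beam moment of inertia (m^4)
G, J = 28e9, 3.54e-25      # shear modulus (Pa), torsion-bar polar moment (m^4)
L, I_p, L_p = 60e-6, 3.07e-27, 1.4e-6   # beam length, pin inertia, pin length
r, m = 50e-6, 1.158e-11    # mirror radius (m), mirror mass (kg)

k = 12 * E * I / L**3                # cantilever-beam stiffness -> ~0.081 N/m
k_t = G * J / L                      # torsion-bar stiffness     -> ~1.65e-10 N-m/rad
k_p = 4 * E * I_p / L_p              # pin stiffness             -> ~6.14e-10 N-m/rad
k_e = 2 * k * r**2 + 2 * k_t + k_p   # equivalent stiffness      -> ~1.35e-9 N-m/rad

I_m = m * r**2 / 4                   # mass moment of inertia of the mirror
f_n = math.sqrt(k_e / I_m) / (2 * math.pi)        # -> ~68.7 kHz
alpha, beta = k_p / (k * r**2), k_t / (k * r**2)  # -> ~3 and ~0.81
omega_bar = 2 * math.sqrt(2 + alpha + 2 * beta)   # -> ~5.14

print(f"k={k:.3f} N/m, k_t={k_t:.3e}, k_p={k_p:.3e}, k_e={k_e:.3e} N-m/rad")
print(f"alpha={alpha:.2f}, beta={beta:.2f}, omega_bar={omega_bar:.2f}, "
      f"f_n={f_n/1e3:.1f} kHz")
```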
Fabrication of the Micromirror Array
The micromirror array was manufactured using the commercial CMOS process of the Taiwan Semiconductor Manufacturing Company (TSMC), according to the micromirror layout defined in Figure 1. Figure 5(a) displays the cross-section of the micromirror after completion of the CMOS process. Because the mirror was fabricated from aluminum, which is highly optically reflective (>90%) [16], it could reduce the degree of optical signal loss resulting from transmission. After performing the standard CMOS process, we obtained the final structure of the micromirror by applying a post-processing procedure to remove the sacrificial layer and to suspend the mirror. The material of the sacrificial layer was silicon dioxide. Figure 5(b) presents the mirror after completion of the post-process. First, we cleaned the sample with acetone to remove any impurities. Next, we used an oxide etchant, Silox [18], to remove the sacrificial layer in which the mirror was embedded. To avoid sticking, the mirror was immersed in isopropyl alcohol for 20 min and baked in an oven at 100 °C for 30 min after the wet etching process. Figure 6(c) reveals that the sacrificial layer was removed and the micromirror was successfully suspended. Figure 7 displays the fixed electrodes positioned below the suspended mirror and the supportive pin located at the center of the mirror, revealing that they were not damaged during the wet etching process. Control of the etching time is very important for avoiding undercut; a test-key was used to monitor the etch-stop time and to avoid over-etching and undercut problems.
Results and Discussion
The profile of the mirror is an important characteristic affecting its performance. We employed a confocal surface optical scan system (Nanofocus), a 3D high-resolution, non-contact surface measurement system featuring a microscope and a PZT (lead zirconate titanate) actuator, to detect the configuration of the mirror. This system had the ability to calculate the out-of-plane displacement and tilting angle and to establish the 3D configuration automatically. The dot pitch is another important parameter affecting the quality of display systems [19]; for a high-resolution display system, the dot pitch is limited to 250 µm. Figure 8(a) presents the profile of the mirror in the absence of a driving voltage. The deformation of this mirror was less than 100 nm, i.e., the mirror was flat and not rotated in the initial state. When the driving voltage was less than 15 V, the displacement of the mirror was small; the largest displacement occurred at a driving voltage of 40 V. Figure 8(b) displays the profile of the mirror under a driving voltage of 40 V. The measured result showed that the displacement of the mirror edge was about 2.10 µm at a driving voltage of 40 V, and the tilting angle of the mirror was about 2.55°. Figure 9 reveals the measured tilting angle of the micromirror under various driving voltages. The results depicted that the tilting angle of the mirror was about 0.24° and 0.95° at driving voltages of 15 V and 30 V, respectively. Furthermore, we employed finite element method (FEM) software (CoventorWare) to simulate the tilting angle of the micromirror; the simulated tilting angle under different driving voltages is shown in Figure 9. The simulated results showed that the tilting angle of the mirror was about 0.21° and 0.74° at driving voltages of 15 V and 30 V, respectively. As shown in Figure 9, the simulated and measured tilting angles of the micromirror were similar under various driving voltages; the slight differences between individual mirrors resulted from variations in the etching during post-processing. The stiffness of the beams decreased because of the post-process etching, so the measured values were larger than the simulated ones.
We employed a MEMS motion analysis system to detect the frequency response of the micromirrors. This measurement system includes a function generator that can apply high-amplitude ac voltages and a microscope to observe the motion of the micromirror. The analyzer captures many images every second to determine the displacement and frequency response of the vibrating object. Figure 10 reveals the frequency response of the micromirror measured by the MEMS motion analysis system. The measured results showed that the resonant frequency of the micromirror was 59.1 kHz. The vertical-axis scan rate is a measure of how many times a display can refresh its whole frame. For video applications, resolution standards [e.g., SVGA (800 × 600)] require a frame refresh rate of 60 Hz; in this situation, the line rate (horizontal scan) of an SVGA display is 36,000 lines per second. Using a bidirectional-scanning technique, a horizontal scanner operating at 18 kHz requires a scanner retracing rate of 19–20 kHz [19]. In the future, consumers are likely to demand higher resolution and smoother video playback; therefore, the development of micromirror arrays exhibiting high resonant frequency and small dot pitches will be critical to enable commercially viable displays.
Conclusions
We have fabricated a micromirror array using the commercial 0.35 µm CMOS process and a simple post-process. Because this fabrication technique was compatible with the CMOS process, such micromirror arrays have the potential to be integrated with circuitry on a chip. The low-cost post-processing method involved simple wet etching with a Silox etchant to remove the sacrificial layer. In the absence of a driving voltage, the micromirror array was very flat (deformation: <100 nm). The mirror was rotatable in four independent directions. The experimental results showed that the tilting angle of the micromirror was about 2.55° under an applied driving voltage of 40 V, and the resonant frequency of the micromirror was 59.1 kHz.
"Engineering"
] |
A hybrid computational framework for intelligent inter-continent SARS-CoV-2 sub-strains characterization and prediction
Whereas accelerated attention beclouded the early stages of the coronavirus spread, knowledge of the actual pathogenicity and origin of possible sub-strains remained unclear. By harvesting the Global initiative on Sharing All Influenza Data (GISAID) database (https://www.gisaid.org/), a total of 8864 human SARS-CoV-2 complete genome sequences deposited between December 2019 and January 15, 2021, processed by gender across 6 continents (88 countries) of the world, Antarctica exempt, were analyzed. We hypothesized that the data speak for themselves and can discern true and explainable patterns of the disease. Identical genome diversity and pattern correlates analysis performed using a hybrid of biotechnology and machine learning methods corroborates the emergence of inter- and intra-continent SARS-CoV-2 sub-strain transmission and sustains an increase in sub-strains within the various continents, with nucleotide mutations dynamically varying between individuals in close association with the virus as it adapts to its host/environment. Interestingly, some viral sub-strain patterns progressively transformed into new sub-strain clusters, indicating varying amino acid and strong nucleotide associations derived from the same lineage. A novel cognitive approach to knowledge mining helped the discovery of transmission routes and a seamless contact tracing protocol. Our classification results were better than state-of-the-art methods, indicating a more robust system for predicting emerging or new viral sub-strain(s). The results therefore offer explanations for the growing concerns about the virus and its next wave(s). A future direction of this work is a defuzzification of confusable pattern clusters for precise intra-country SARS-CoV-2 sub-strain analytics.
However, the dissimilarity in genome sequencing of early viral samples obtained from infected individuals in the European, North American, Asian, and Oceanian regions 7 prompted several studies aimed at analyzing and understanding the evolutionary history and relationships among the different SARS-CoV-2 strains.
SARS-CoV-2 is a β-coronavirus, an enveloped, non-segmented, positive-sense RNA virus (subgenus sarbecovirus, subfamily Orthocoronavirinae) 8 , whose spread began in December 2019 in Wuhan, China. It has since been confirmed that two strains of the new coronavirus (the L- and S-strains) are spreading around the world today 9 , and the fact that the L-type is more prevalent suggests that it is "more aggressive" than the S-type. A greater proportion of research progress on SARS-CoV-2 has taken the biotechnology dimension 10,11 , specifically focusing on species characterization and variant analysis through feature extraction. Consequently, Artificial Intelligence (AI) and Machine Learning (ML) methods are expanding biotechnology capacity into the bioinformatics realm, through intelligent genome probing for precise viral feature classification. So far, AI/ML research on SARS-CoV-2 has permeated four key areas of medicine and healthcare, namely, screening and treatment [12][13][14][15] , contact tracing 16 , prediction and forecasting 17,18 , and drug and vaccine discovery [19][20][21] .
To understand the origin and structure of SARS-CoV-2, a sequence of the viral genetic material is required. Sequencing viral genomes is performed to identify regions of similarity that may have consequences for functional, structural, or evolutionary associations 22 . Furthermore, it can reveal the possibility of future health risks and vaccine remedies. Phylogenetic trees and genomic trees (also referred to as hierarchical clustering) are common determinants for representing the genetic diversity and evolutionary relationships of sequenced genomes. While a phylogenetic tree reflects slow evolution within the genome (point mutations), hierarchical clustering describes major genetic re-arrangement events (insertions or deletions). Converting massive amounts of complete genome sequences into meaningful biological representations has limited progress in discovering viral sub-strains and detailed transmission routes. Although numerous algorithms/tools have evolved to target specific gene sites/locations for "on-the-fly" online phylogeny representations, incomplete representation and clustering errors abound, as different genome sites undergo different evolutionary changes, resulting in disparate multi-dimensional patterns at different sites. Attempts at estimating phylogenies by comparing entire genomes have been made by focusing mainly on gene content and gene order comparisons. While early attempts concentrated on morphological characters with the premise that direct gene comparison makes more sense, modern attempts use sequences from homologous genes 22 but are burdened by the fact that a gene's evolutionary history may differ from the evolutionary history of the organism, as some genes sufficiently conserved across the species of interest may escape detection. Alignment-free genome comparison methods are therefore becoming popular 22,23 and have evolved to crash the heavy computational requirements of traditional alignment-based methods. Randhawa et al. 24 , for instance, proposed an alignment-free approach based on ML for fast, inexpensive, taxonomic classification of complete COVID-19 genomes in real time.
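To make the alignment-free idea concrete, the sketch below represents each sequence by a k-mer frequency vector and compares vectors with a cosine distance. The toy fragments and the choice k = 3 are illustrative assumptions, not the settings of any study cited here.

```python
from collections import Counter
from itertools import product
import math

def kmer_profile(seq, k=3):
    """Normalized k-mer frequency vector over all 4^k possible k-mers."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values()) or 1
    return [counts["".join(p)] / total for p in product("ACGT", repeat=k)]

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

# Two toy genome fragments differing by a single base.
g1 = "ATGGCGTACGTTAGCATGCATCGAGGTACC"
g2 = "ATGGCGTACGTAAGCATGCATCGAGGTACC"
print(cosine_distance(kmer_profile(g1), kmer_profile(g2)))  # small but nonzero
```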
Variants of SARS-CoV-2 have emerged with reported new peaks of infection. A variant is a strain when it has a different characteristic. Variants with few mutations belong to the same lineage. Lineages are important for showing how a virus spreads through communities or populations. Interestingly, the less virulent strains are disappearing while those showing significant mutant variations prevail. A few documented cases of the spread of the viral sub-strains are observed based on locations, as follows: In USA, 4 sub-strains and 11 top mutations were discovered from the analysis of 12,754 complete SARS-CoV-2 genome sequences, where 2 out of 4 discovered sub-strains were potentially more infectious 25 . These sub-strains and 5 mutants were first detected in China, Singapore, and the United Kingdom 26 . In England, a sub-strain of replicative advantage was also discovered as variant of SARS-CoV-2, characterized by 9 spike protein mutations consisting of 3 deletions and 6 substitutions 27 . Some of these variants were prevalent in Netherlands, Switzerland, and France. In Southwestern Wisconsin, Southeastern Minnesota, Northeast Iowa, the sequencing of whole viral genomes of COVID-19 positive patients showed the spread of sub-strains to individuals in 13 cities from epicenters of the infection 28 . However, no viral sub-strain was observed in China 5 .
Vaccine types are also being circulated with several conspiracy theories and disbeliefs about the virus existence spreading across the globe. There is fear that emerging sub-strain variants may confer resistance to antibody neutralization, as evolving variants of concern are rapidly growing lineage to SARS-CoV-2 with high replicable mutants that may hinder the efficiency of existing vaccines and expand in response to the increasing after-infection or vaccine-induced seroprevalence 27 . Currently, most COVID-19 vaccines target the viral spike protein.
Although mutations may reduce their efficacy, they do not obliterate their effects. Inactivated virus vaccines that target the whole virus have been developed in China, as the immune responses they induce target more than a single part of the spike protein; hence, inducing several protective immune responses and instilling redundancy in the protective immune responses.
Mining additional knowledge from clinical data would assist complete feature extraction, missing information recovery, hidden pattern understanding, and facilitate output target labeling. Most biotechnology/bioinformatics tools are 'black boxes' and not open to contributions from the research community, including reproducible research. Furthermore, extracted features are incomplete for meaningful knowledge integration. To support the growing field of medical- and bio-informatics, this paper adopted a novel approach to genome sequence mining. Transitions in nucleotides (dinucleotides) and changes in gene (mutation) information were exploited as input features or predictors, as these features have a direct connection with the behavior of the virus. A hierarchical agglomerative clustering method was applied on the extracted features to detect optimal natural clusters for determining the evolutionary group of the various isolates, across countries. Using a self-organizing map (SOM), genome patterns with low similarity profiles (or highly variable genomes), including the reference genome, were discerned to visually establish which sub-strain group(s) the various genome samples or isolates belong to. By decoupling the SOM map through correlation hunting, a cognitive map that associates similar isolate clusters was obtained. The generated patterns and isolate similarity information provided details for enriching the input dataset through a supervised labelling of the classification targets. Statistical analysis validated the variability of the SARS-CoV-2 isolates. This research has therefore made substantial contributions to knowledge, as it provides the following:

(i) Useful Intermediate Results. As opposed to most biotechnology and bioinformatic tools, useful intermediate results are produced in this paper to give further insights into the prevalence and transmission of SARS-CoV-2. The research is also replicable, as the algorithms and data are available to reproduce and validate our results.

(ii) Support for the Contact Tracing of Undocumented Sources of Infection. Tracing infectious disease routes for efficient documentation of infected cases is crucial in emerging pandemic situations. While the excavated data holds only few traces of transmission history, our pattern clustering and cognitive knowledge mining results group the various isolates into sub-strain clusters. This information is then used to label the output targets for classification and prediction, hence providing understanding of which of the viral sub-strain(s) maintain(s) the reference genome pattern, or is/are spreading within a particular country, or has/have been acquired from a different country. Furthermore, pattern progressions indicating emerging cluster transitions are revealed by the self-organizing map deployed in this study.

(iii) Intelligent System Framework. From labelled classification targets, accurate sub-strain classification and prediction is achieved. The proposed framework combines machine learning techniques and cognitive knowledge mining to extract dinucleotide and mutation frequencies for base variant analysis. Also, hidden sub-strain interactions between nucleotide sequences and other information not hitherto seen in the raw data are uncovered.

(iv) Gender-Specific Isolates Mining. To engage meaningful research in SARS-CoV-2, characterization and prediction by gender is crucial. This aspect, which is often missing in the literature, was excavated from GISAID.
Metadata of the excavated SARS-CoV-2 genomes by gender are available (Data S7: SupplData7.xlsx). The metadata permit the intelligent mining of SARS-CoV-2 demographic information, as ambiguities in annotation labels inherent in the Global initiative on Sharing All Influenza Data (GISAID) database (https://www.gisaid.org/) have been resolved in this paper. Input features and classification target labels of unique isolates based on SOM cluster analysis and cognitive knowledge mining are also available (Data S8: SupplData8.xlsx). These resources can be integrated into expert decision-making systems to support early contact tracing and global disease surveillance.
Related works
Several studies have dwelt on the characterization of the SARS-CoV-2 genome for tracing the evolution, strains, and diversity of the virus. In Tang et al. 9 , for instance, a population genetic analysis of 103 SARS-CoV-2 genomes was performed. Their analysis revealed two dominant types of SARS-CoV-2, namely the L type (~70%) and the S type (~30%). In another study, Stefanelli et al. 7 characterized early viral genome samples across regions, as noted above. The application of ML in the combat of COVID-19 has inspired new discoveries as well as improved methods based on the experience of previous/related epidemics. Familiar areas of application center around medical imaging, disease tracing, epidemiology modeling and medicine (analysis of protein structure and drug discovery), and the virulent nature of the virus. Whereas the processing of input data for informed decision support is necessary, the types of data exploited in the case of SARS-CoV-2 and related pandemics are mainly demographic and/or control or clinical data contributed by patients/volunteers around the world. Table 1 presents a summary of works carried out on ML/AI in related areas of application, indicating the objective, number of isolates collected and data source, methods, results/findings, and drawbacks. From the related works, we observe the following: (i) Most of the works explore hybrid tools that combine biotechnology and ML/AI methodologies, which have advanced precision in approach and solution to the pandemic. (ii) While 50% of the works rely on limited genomic evidence, others are mainly simulation studies. (iii) The fulcrum of most of the works revolves around characterization and forecasting, with comparative analysis of SARS-CoV-2 evolution and the relationship between it and (other) related viruses. (iv) All the works are silent on the gender dimension. (v) None of these works, to the best of our knowledge, has engaged the possibility of SARS-CoV-2 sub-strain discovery.
The abundance of repetitive DNA in the human genome assembly has introduced huge gaps of multi-megabase heterochromatic regions that challenge standard mapping and assembly algorithms. Consequently, the composition of the sequence and potential functions of these regions have largely remained unexplored. Furthermore, existing genome tools cannot readily engage complete genome analysis to predict complex details and reveal hidden patterns, essential to offer explanations for the increased diversity of viral diseases. This work is therefore motivated by the existing gap between scientific knowledge and clinical application. Despite current advancements in state-of-the-art predictions, the application of personalized genomics in clinical practice is yet to flourish.
Results and discussion
The general workflow describing the proposed hybrid computational framework is presented in Fig. 1, and the sequence of steps implementing the workflow is given in Supplementary Table S1. In addition to describing the steps, a visual demonstration of the implementation is incorporated.
Dinucleotide transitions. Averages of dinucleotide transitions of SARS-CoV-2 genomes computed across the various continents are presented in Fig. 2. These transitions are represented as quadrilaterals dissected along their diagonals. Wang et al. 45 found that the SARS-CoV-2 reference genome has 29.94% A, 32.08% T, 19.61% G and 18.37% C. Hence, the expected proportion of a dinucleotide transition is the product of the proportions of its two nucleotide bases. For instance, the CG dinucleotide in the viral genome is 3.60% (i.e., 19.61% × 18.37%). From this, we arrive at the following computations for the respective dinucleotides/features identified in this study: AA = 8.96%; CC = 3.37%; GG = 3.84%; TT = 10.29%; AC = 5.50%; AG = 5.87%; AT = 9.60%; CG = 3.60%; CT = 5.87%; GT = 6.29%; TG = 6.29%; TC = 5.87%; TA = 9.60%; GC = 3.60%; GA = 5.87%; and CA = 5.50%. Our results corroborate Wang et al. 45 .

Figure 1. Workflow describing the proposed hybrid approach. The workflow begins with the excavation of FASTA files of human SARS-CoV-2 genome sequences from GISAID. These files were stripped and processed into a genome database (DB) as multiple columns of nucleotide sequence. AI/ML techniques were then applied to extract knowledge from the genome datasets as follows. Using ML techniques, compute dis(similarity) scores between the various pairs of genome sequences and obtain a genomic tree of highly dis(similar) isolates grouped in the form of a dendrogram/phylogenetic tree. Determine the optimal number of natural clusters, to provide additional knowledge for supervised learning. Separate the viral sub-strains using SOM component planes, for possible transmission pathway/pattern visualization. Perform nucleotide alignment of the entire genome sequences (owing to the varying sequence lengths of the different genome isolates, a cutoff at the last nucleotide of the genome isolates or the reference genome serves as the maximum pair for comparison), removing duplicate columns while imposing a similarity threshold, to yield unique genome sequences. Extract genome features by computing dinucleotide transitions and mutation frequencies. Generate a cognitive map, for intelligent sub-strain prediction. Label classification targets of extracted features using derived SOM clusters and the cognitive map. Learn and predict new/emerging sub-strains using an ANN with a k-fold validation method.

Nucleotide mutations. Mutations in base pairs are important for understanding the pathogenicity of SARS-CoV-2. These computations were compiled after direct pairwise comparisons with the reference genome, averaged across the various continents, to produce Fig. 3. As expected, changes in base pairs were observed after pairwise comparisons. Also, genome sequences with negligible changes (no significant mutations) from the reference genome were noticed across the various continents for male and female isolates (see Table 3). Overall, a total of 587 insignificant mutants, representing 14.98% of the total number of isolates, was observed for male patients, while female patients showed 258 insignificant mutants, representing 9.06% of the total number of isolates.
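Returning to the expected dinucleotide proportions above, the computation is a simple product of base frequencies; a minimal sketch follows, using the base composition reported by Wang et al. 45 .

```python
# Expected dinucleotide proportions under base independence.
base_freq = {"A": 0.2994, "T": 0.3208, "G": 0.1961, "C": 0.1837}

expected = {x + y: base_freq[x] * base_freq[y]
            for x in base_freq for y in base_freq}

for dinuc, p in sorted(expected.items()):
    print(f"{dinuc}: {100 * p:.2f}%")   # e.g. CG -> 3.60%, AA -> 8.96%, TT -> 10.29%
```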
Average nucleotide mutation variants. In an analysis of SARS-CoV-2 mutations in the United States, CT mutant variants were found to have strong gender dependence 22 . Observed mutation variants between male and female isolates (M-F), computed from Fig. 3 across the various continents, are shown in Table 4. Positive values indicate male frequency dominance, while negative values indicate female frequency dominance.

Hierarchical clustering analysis (agglomerative nesting: AGNES). Li et al. 46 investigated the angiotensin-converting enzyme 2 (ACE2), the receptor agent for the SARS-CoV-2 virus and a known contributor to viral infection susceptibility and/or resistance 47 . ACE2 generates small proteins by cutting up the larger protein angiotensinogen, in turn affecting the nucleotide/protein. They compared ACE2 expression levels across 31 normal human tissues between males and females and between younger and older persons using a two-sided Student's t-test. By examining the expression patterns, they found that protein expression levels were similar between males and females and between younger and older persons in the experimented tissues. Furthermore, men showed worse prognosis than women. Their findings, however, lacked experimental and clinical data validation.
Using clinical evidence, we provide results of a hierarchical clustering analysis to examine the arrangement of the nucleotide (protein) sequences/clusters across the entire genome through mutant accumulation, for male and female patients. Four linkage methods were compared: ward, complete, average, and single. The ward method had the highest agglomerative coefficient (male = 0.9746; female = 0.9683), indicating more compact clusters; it was closely followed by the complete (male = 0.9579; female = 0.9523), average (male = 0.9423; female = 0.9445), and single (male = 0.8710; female = 0.9058) methods.
To determine if differences exist in the genome sequences between genders, an independent t-test was imposed on the AGNES dis(similarity) scores. Results showed that male isolates had marginally higher scores (0.9726 ± 0.0377) than female isolates (0.9673 ± 0.0344), t(3280) = 1.710, p = 0.0871. Thus, there was no statistically significant difference in mean similarity between the nucleotide (protein) structures of the two groups at the 95% confidence interval; hence, no significant genetic variations were observed. This result therefore corroborates the findings of Li et al. 46 and validates the claim that no significant genetic variation exists in human SARS-CoV-2 genomes for the two groups.
Genome pattern analysis. Component planes reveal the distribution of single feature values on a SOM map. They permit an investigation of which continents share similar variant(s) or sub-strain(s) of SARS-CoV-2 and which variant permeates the different regions. Each component plane expresses the genome pattern of an isolate, where similar nucleotides are placed closely together in a 2D space. Hence, the patterns are established based on accumulation of nucleotides rather than individual nucleotide changes. To account for the variability in SOM neighborhood structure at every SOM run, the reference genome was included as part of the experiment datasets during each training phase. Hence, 4 reference genome pattern possibilities were generated to establish the topology best suited to the trained datasets.
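As a rough illustration of this step (not the authors' exact pipeline), the sketch below trains a SOM on a feature matrix and plots one component plane; it uses the third-party minisom package, and the map size, iteration count, and random data are placeholder assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt
from minisom import MiniSom   # pip install minisom

rng = np.random.default_rng(0)
X = rng.random((100, 16))     # placeholder: isolates x dinucleotide features

som = MiniSom(10, 10, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, 5000)     # unsupervised training

# Component plane of feature 0: weight dimension 0 across the 10x10 map.
plt.pcolor(som.get_weights()[:, :, 0], cmap="viridis")
plt.colorbar(label="feature 0 weight")
plt.title("SOM component plane (feature 0)")
plt.show()

print("BMU of isolate 0:", som.winner(X[0]))  # best-matching unit coordinates
```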
Our topologies possess random (but controllable) discontinuities that permit more flexible self-organization with high-dimensional data, thus preserving the map structure as much as possible.

Table 5. Distribution of discovered clusters by continent and gender. Cluster 1 represents the reference genome. Clusters 2, 3, 4, 5 and 6 are inter-continent pattern clusters or sub-strain(s). Cluster 7 indicates discovered intra-country pattern clusters or sub-strains.

The analysis of Wang et al. 22 suggests the presence of four sub-strains in the United States. Our results therefore sustain an increase in sub-strains within the various continents and offer explanations for the growing concerns and next wave(s) of the virus. A distribution of the discovered clusters (7 in this case) by gender, across the various continents under study, is presented in Table 5. Notice that cluster 7 has the highest proportion of data points, indicating increased intra-country transmissions, save for North America, where cluster 3 has the highest proportion of data points, an indication of increased inter-country transmissions. A further analysis across the continents reveals that the African, Asian, and South American isolates clustered around sub-strains G1, G2 and G5 (where G represents a generic/general sub-strain), with the number of isolates and cluster proportions for male and female patients distributed as follows. Due to paucity of data, the Oceanian isolates have data for only clusters 1 and 2 (M = 24.86%, F = 18.95%). Table 6 summarizes the cluster distribution, by gender, across the various continents.
Cognitive knowledge generation. While mutations are expected, there is a need to initiate a robust surveillance mechanism for continuous monitoring of public health implications and rapid response to new strains of COVID-19. To intelligently predict the viral sub-strains for both genders, novel cognitive maps that preserve chains of similar isolates were generated from the SOM component planes using the Python programming language. The extracted clusters are necessary for supervised labeling of the classification targets. By disassembling the SOM correlation hunting matrix space, we attribute these associations to disparate classes of discovered viral sub-strains. The outcome is a set of cognitive maps with 7 clusters simulating the discovered SOM patterns and the countries/isolates linked to these patterns for male and female patients (Supplementary Table S3). Each sub-strain cluster holds similar isolates that belong to a related pattern bounded by a certain degree of association or correlation range, established by the SOM, and captures all isolates discovered within this range. We also captured from the SOM component planes any progression in patterns showing sub-strain development leading to well-separated cluster image(s). The cognitive knowledge would assist early contact tracing of cases in emerging disease situations as well as establish how the reference genome has evolved over time. This additional knowledge also permits further characterization of the viral sub-strains, as our results allow unique SARS-CoV-2 base pair sequence identification (sequences which do not appear in other viral sub-strains) and could be useful as baselines for designing new primers that permit further insights and examination by experts.

Before classification, the features were scaled as:

x′ = (x_i − min(X_j)) / (max(X_j) − min(X_j))   (1)

where x_i is a nucleotide transition or mutation feature, and min(X_j) and max(X_j) are the minimum and maximum means obtained from the means of the respective nucleotide transition or mutation feature datasets. The obtained scaling prevents zero values, hence yielding an even spread of the datasets. Next, using the k-means algorithm via the Silhouette criterion, 7 cluster groups were assigned to the records. These groups or clusters provided information for relabeling the cluster column of both datasets and constructing the output classification targets.

The performance of the NN model was evaluated on the normalized, labelled datasets using the following metrics: Classification Accuracy, Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Precision, Recall and Area Under the Curve (AUC). For the male dataset, the metrics under k-fold cross-validation were:

k = 3: Accuracy = 98.5900 ± 0.7600; RMSE = 0.0500 ± 0.0200; MAE = 0.0100 ± 0.00; Precision = 0.9900 ± 0.0300; Recall = 0.9700 ± 0.0400; AUC = 1.00 ± 0.00
k = 5: Accuracy = 98.5900 ± 0.7600; RMSE = 0.0500 ± 0.0200; MAE = 0.0100 ± 0.00; Precision = 0.9900 ± 0.0300; Recall = 0.9700 ± 0.0400; AUC = 1.00 ± 0.00
k = 10: Accuracy = 98.5900 ± 0.7600; RMSE = 0.0500 ± 0.0200; MAE = 0.0100 ± 0.00; Precision = 0.9900 ± 0.0300; Recall = 0.9700 ± 0.0400; AUC = 1.00 ± 0.00
k = 15: Accuracy = 98.5900 ± 0.7600; RMSE = 0.0500 ± 0.0200; MAE = 0.0100 ± 0.00; Precision = 0.9900 ± 0.0300; Recall = 0.9700 ± 0.0400; AUC = 1.00 ± 0.00

The metric-specific results from each dataset, compared using a paired t-test, depict no statistically significant difference between the male and female features (p > 0.05) at the 0.05 level of significance. The results in Tables 6 and 7 confirm the suitability of ANNs for predicting COVID-19 sub-strains for male and female patients, respectively. Furthermore, perfect accuracies with an AUC of 1 were obtained for k = 3, 5, 10 and 15 folds.
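A hedged sketch of this classification step follows: a feed-forward network evaluated with stratified k-fold cross-validation. The feature matrix, labels, and network size are placeholder assumptions, not the released datasets or the authors' exact architecture.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(1)
X = rng.random((500, 16))            # placeholder transition/mutation features
y = rng.integers(1, 8, size=500)     # placeholder sub-strain labels 1..7

X_scaled = MinMaxScaler().fit_transform(X)   # min-max scaling as in Eq. (1)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
for k in (3, 5, 10, 15):
    cv = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    scores = cross_val_score(clf, X_scaled, y, cv=cv, scoring="accuracy")
    print(f"k={k:2d}: accuracy = {scores.mean():.4f} +/- {scores.std():.4f}")
```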
Receiver operating characteristic (ROC) curves showing the training, validation, test, and overall ROC, with k = 3, 5, 10 and 15, for male and female patients are given in Fig. 10a and b, respectively. The deployed model is helpful for classifying new datasets and for building expert support systems for efficient SARS-CoV-2 sub-strain discrimination.
In Table 8, a summary of important performance metrics extracted from the literature for ANNs with or without cross-validation is presented to enable a comparison of our approach with the state of the art. We observe that the proposed approach performed better, with very high classification accuracy, precision, and recall rates, indicating good generalization and correct prediction.
Conclusion
AI-based Big Data analytics are offering promising applications through the processing of large and complex datasets. In clinical diagnostics, for instance, image processing and computer vision are revolutionizing image-based diagnosis. In the field of genetics, large-scale genomic research is poised to improve care through genotype definitions of other organisms. The increased availability of multiscale, multimodal, longitudinal patient datasets has provided exclusive opportunities for individualized medicine by permitting the visualization of different patient dimensions. Although this is widely believed to enhance the performance of predictive algorithms for near-clinical practice, these data are highly unstructured and require further refinement to enable structured access and intelligent feature combination.
The move toward individualized medicine has, however, exposed limitations, challenges, and biases: machine learning models are typically sensitive to selection biases (i.e., under- or over-representation of specific patient subgroups in the training cohort, alongside under-explored ethical considerations), which have hindered the successful deployment of AI in medical applications, particularly those utilizing human genetics and genome datasets. Although addressing underrepresented data in training datasets can resolve bias, and model retraining can assist in improving performance, symptoms confusable with those of related diseases have posed a major bottleneck for future applications. This work has created a foundation for future studies on emerging infectious diseases by investigating the variation and functions of SARS-CoV-2 genomes for possible discovery of patterns exhibited by human isolates. A novel taxonomy was created to permit intelligent feature mining. The case of symptomatic and asymptomatic patients presents inconsistencies and is inconclusive in this paper; this aspect of infectious disease demands further research efforts on prompt detection of asymptomatic cases. A major limitation of this research is that some SOM pattern clusters were still confused, demanding a defuzzification of these clusters using robust neuro-fuzzification tools.
Methods
Data source and genome sequence selection. Publicly available datasets of coronavirus cases around the globe deposited between December 2019 and January 15, 2021 were excavated from GISAID (https://gisaid.org), a database of SARS-CoV-2 partial and complete genome compilations contributed by clinicians and researchers the world over. Using the EpiCoV query interface of GISAID, complete genome sequences with patient status information (gender and age) were filtered. We observed that not all the excavated isolates met this criterion. Hence, out of about 70,000 entries, 8864 isolates (5130 male samples and 3734 female samples) from different countries of the world contained at least the gender information and were collected and processed, across 6 continents, Antarctica exempt (as no deposit of SARS-CoV-2 data was found at the time of excavation). Patient ages ranged from 1 month to 107 years. Complete genomes with lengths above 29,000 bp and < 1% undefined or ambiguous bases ('N's), or with high-coverage unambiguous bases or nucleotides, were selected.

Table 7. Performance of the NN model on the female dataset under k-fold cross-validation:
k = 3: Accuracy = 98.5900 ± 0.7600; RMSE = 0.0500 ± 0.0100; MAE = 0.00 ± 0.00; Precision = 0.9900 ± 0.0100; Recall = 1.00 ± 0.01; AUC = 1.00 ± 0.00
k = 5: Accuracy = 98.6100 ± 0.7000; RMSE = 0.0500 ± 0.0100; MAE = 0.0100 ± 0.00; Precision = 0.9900 ± 0.0300; Recall = 1.00 ± 0.01; AUC = 1.00 ± 0.00
k = 10: Accuracy = 98.6100 ± 0.7000; RMSE = 0.0500 ± 0.0100; MAE = 0.00 ± 0.00; Precision = 0.9900 ± 0.0100; Recall = 1.00 ± 0.01; AUC = 1.00 ± 0.00
k = 15: Accuracy = 98.6100 ± 0.7000; RMSE = 0.0500 ± 0.0100; MAE = 0.00 ± 0.00; Precision = 0.9900 ± 0.0100; Recall = 1.00 ± 0.01; AUC = 1.00 ± 0.00

Table 9 documents the continent, isolate distribution by country, isolate distribution by gender, and total isolates excavated. Metadata on the extracted genome sequences consist of the following columns: Isolate Code (Country + isolate number), Country, Accession Number, Gender, Age, Status, Specimen source and Additional information. Fast-all (FASTA) files of the genome isolates can be located at GISAID using the Accession Number. Specimen sources include swabs (nasal, oral, throat, nasal and oral), fluids (bronchoalveolar lavage, saliva, sputum, stool) and unknown. We observed that the GISAID database was inconsistent in rendering the patient status, as numerous incoherent annotations introduced inherent redundancy. To assist efficient documentation and processing of data, a taxonomy re-classifying the patient status is given in Fig. 11. This taxonomy subsumes the incoherent annotations (annotations in square text boxes) into unique specifications (annotations in oval shapes), for intelligent data mining 48 .
The presence of ambiguous nucleotides may potentially mask the genomic signature encoded within nucleotide frequencies. Although sequencing errors in the form of ambiguous nucleotides (e.g., strings of the letter "N") were noticed in the datasets, the affected nucleotide positions were ignored during preprocessing, such that the nucleotide positions maintained their current position and did not shift. A total genome sequence size of between 8864 × 29,000 and 8864 × 30,165 bp (i.e., 257,056,000 to 267,382,560 bp) was excavated, processed, and stored in comma-separated value (CSV) files. Table 10 documents patient status statistics for symptomatic and asymptomatic cases. As observed, there are more hospitalized cases (7580) than non-hospitalized cases (391), with more male than female patients hospitalized (M = 4318, F = 3262). Furthermore, more males died of COVID-19 than females (M = 541, F = 248). Asymptomatic cases represent 37/5130 (0.72%) and 41/3734 (1.10%) of the total male and female isolates, respectively.
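The selection criteria above translate directly into a metadata filter. The following is an illustrative sketch only; the column names (length, n_fraction, gender) are hypothetical and do not reflect GISAID's actual schema.

```python
import pandas as pd

# Toy metadata table standing in for the excavated GISAID entries.
meta = pd.DataFrame({
    "accession":  ["EPI_1", "EPI_2", "EPI_3"],
    "length":     [29903, 28500, 29850],     # genome length (bp)
    "n_fraction": [0.002, 0.020, 0.005],     # fraction of ambiguous bases ('N')
    "gender":     ["male", None, "female"],
})

# Keep complete genomes (>29,000 bp), <1% ambiguous bases, with gender recorded.
selected = meta[(meta["length"] > 29000)
                & (meta["n_fraction"] < 0.01)
                & meta["gender"].notna()]
print(selected)
```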
Configuration of computing device. An HP laptop 15-bs1xx with up to 1 TB storage running on Windows 10 Pro Version 10.018326 Build 18,362 was used for processing the excavated genome sequences, algorithms/programs, and other ancillary data. The system has an installed memory (RAM) of 16 GB with the following processor configuration: 1.60 GHz, 1801 MHz, 4 Core(s) and 8 logical processors. Although our system performed satisfactorily and produced the desired results, higher system configurations would improve the computational speedup.
Hierarchical agglomerative clustering (HAC). The dataset is configured with observations (nucleotides) represented in rows, while columns are variables (genome sequences ordered by country). The number of columns corresponds to the selected countries, while the sequences have varying lengths. The data table is further converted into as.matrix format, where all values of raster-layer objects have columns for each layer and rows for each cell, with numeric (continuous) values. In order to make the variables comparable through the elimination of arbitrary variable units, they are transformed (standardized) such that they have a mean of zero and a standard deviation of unity 49 , using Eq. (2):

x′ = (x − x̄) / sd(x)   (2)

where x̄ is the mean and sd(x) represents the standard deviation of the feature values.
The procedure for implementing the HAC is as follows. Compute all the pairwise similarities (distances) between observations in the dataset and represent the result as a matrix. The resultant matrix is square and symmetric, with diagonal members defined as unity, the measure of similarity between an element and itself. The matrix elements are computed by iterating over each element and calculating its (dis)similarity to every other element. Suppose A is a similarity matrix of size N × N, and B a set of N elements; A_ij is the similarity between elements B_i and B_j using a specified criterion (Euclidean distance, squared Euclidean distance, Manhattan distance, maximum distance, Mahalanobis distance, cosine similarity). The selected criterion, however, depends on the nature of the data and the application.

Table 9. Distribution of excavated isolates. Number of countries excavated per continent: Africa (14), Europe (28), Asia (28), South America (7), North America (8), Oceania (3). For Oceania: Guam (2), New Zealand (2), Australia (14); male = 12, female = 6, total = 18.
HAC visualization.
After computing the distance between every pair of observation points, the result is stored in a distance matrix. Then, (i) every point is put in its own cluster (i.e., the initial number of clusters corresponds to the number of variables); (ii) the closest pairs of points are merged based on the distances from the distance matrix, reducing the number of clusters by 1; (iii) the distance between the new cluster and the previous ones is recomputed and stored in a new distance matrix; (iv) steps (ii) and (iii) are repeated until all the clusters are merged into one single cluster. The distance separating the clusters is specified via linkage methods 49 , which include complete, average, single, and Ward. Complete linkage uses the maximum inter-cluster distance when merging, whereas single linkage adopts the minimum inter-cluster distance. Average linkage calculates the average distance between groups of genome sequences before merging, while Ward's method minimizes the total within-cluster variance and merges the pair of clusters with the minimum between-cluster distance. We rely on all four assessment techniques and adopt the distance measure with the highest agglomerative coefficient for cluster formation. The resultant cluster solution is finally visualized as a tree structure called a dendrogram (or phylogenomic tree). As the tree is traversed upwards, observations that are similar to each other are combined into branches, which are themselves fused at a greater height. The height of the fusion, given on the vertical axis, indicates the (dis)similarity between two observations: the greater the height of the fusion, the less similar the observations are. Figure 12 shows cluster plots and genomic plots generated using the Ward minimum variance criterion.
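For illustration, a minimal Python sketch of this pipeline (our own, not the authors' code; the placeholder matrix X stands in for the standardized genome table, with one row per genome sequence) could look as follows.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

X = np.random.rand(20, 100)        # placeholder: 20 genome sequences, 100 features each

D = pdist(X, metric="euclidean")   # condensed pairwise distance matrix
Z = linkage(D, method="ward")      # Ward minimum variance criterion
dendrogram(Z)                      # phylogenomic-style tree
plt.show()
```

Swapping the metric argument (e.g., "cityblock" for Manhattan or "cosine") reproduces the alternative (dis)similarity criteria listed above.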
Optimal natural clusters selection. While there are natural structural entities in some datasets that provide information on the number of clusters or classes, others, including the dataset containing genome sequences, are structured without boundaries. Cluster validation (an unsupervised methodology aimed at unravelling the actual count of clusters that best describes a dataset without any a priori class knowledge) is therefore essential. In this paper, three widely used criteria for validating the number of clusters in the genome sequence dataset, namely silhouette, elbow 50 , and gap statistics, are discussed. The three criteria aim at minimizing the total intra-cluster variation (total within-cluster sum of squares) as given in Eq. (3):
wss = Σ_{k=1}^{K} W(c_k), (3)

where c_k is the kth cluster and W(c_k) is the within-cluster variation. The total within-cluster sum of squares (wss) measures the compactness of the clustering solution. The following steps are applied to achieve the optimal clusters: (i) compute a clustering algorithm (e.g., k-means clustering) for different values of k over a range of cluster counts; (ii) for each k, calculate the wss; (iii) plot the curve of wss against the number of clusters k; (iv) the location of a bend (knee) in the plot is generally considered an indicator of the appropriate number of clusters. The silhouette criterion validates the clustering solution using the pair-wise difference between within-cluster and between-cluster distances, and the optimal cluster number is the one maximizing this index 51 . The elbow criterion plots the explained variation as a function of the number of clusters and picks the elbow of the curve as the number of clusters to use. Gap statistics compares the total intra-cluster variation for different values of k with their expected values under a null reference distribution of the data; the reference dataset is generated using Monte Carlo simulations of the sampling process.
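A compact sketch of the elbow and silhouette computations (ours, with a placeholder feature matrix; the gap statistic would additionally require Monte Carlo reference samples) is shown below.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = np.random.rand(200, 12)                      # placeholder 12-dimensional feature vectors

ks = range(2, 11)
wss, sil = [], []
for k in ks:
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    wss.append(km.inertia_)                      # total within-cluster sum of squares, Eq. (3)
    sil.append(silhouette_score(X, km.labels_))  # mean silhouette width

best_k = list(ks)[int(np.argmax(sil))]           # silhouette criterion; inspect wss for the elbow
```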
Nucleotide mutation frequency. Several techniques for biological sequence alignment (multiple or pairwise) have flourished in the literature 54 and are continually being refined, but most of these techniques suffer from a lack of accuracy and partial interpretations. A direct pairwise alignment of each nucleotide with the reference genome was achieved by computing the recurrence of mutated nucleotides down the sequence line. For this study, the sequence of the established SARS-CoV-2 reference genome (NC_045512; 29,903 bp), sequenced in December 2019, was used. Suppose n represents the total length of a genome. By permitting a single sliding iteration window, a mutation may be any of the twelve ordered nucleotide pairs (A→C, A→G, A→T, C→A, C→G, C→T, G→A, G→C, G→T, T→A, T→C, T→G). If we denote the frequency of the ith nucleotide pair as p_i, then each genomic sequence pair yields a 12-dimensional feature vector of the form of Eq. (7):

P = (p_1, p_2, ..., p_12). (7)

Unsupervised genome clustering. Several mathematical techniques have been deployed for identifying underlying patterns in complex data. These techniques, which cluster data points differently in multidimensional space, are important for discovering the fundamental patterns of gene expression inherent in data. The clustering technique adopted in this paper is the SOM, which has been used extensively in the field of bioinformatics for visual inspection of biological processes and gene pattern expressions, as maps of (input) component plane analysis. SOM is a neural network that projects data into a low-dimensional space 55 by accepting a set of input data and then mapping the data onto neurons of a 2D grid (see Fig. 13). The SOM algorithm locates a winning neuron and adjusts its weights and those of neighboring neurons. Using an unsupervised, competitive learning process, SOMs produce a low-dimensional, discretized representation of the input space of training samples, known as the feature map. The input nodes have p features, and the output nodes q prototypes, with each prototype connected to all features. The weight vector of the connections constitutes the prototype of each neuron and has the same dimension as the input vector. During training, the weights of the winning neuron and of neurons in a predefined neighborhood are adjusted towards the input vector using the update rule

w_i(t+1) = w_i(t) + r(t) f(i, q) [x(t) − w_i(t)], i = 1, ..., L,

where L is the total number of neurons in the network, r is the learning rate, and f(i, q) is the neighborhood function, with value 1 at the winning neuron q and decreasing as the distance between i and q increases. SOMs differ from other artificial neural networks in that they apply competitive learning, as opposed to error-correction learning such as backpropagation, and in that they preserve the topological properties of the input space using a neighborhood function. At the end, the principal features of the input data are retained, making SOM a dimension-reduction technique. The batch unsupervised weight/bias algorithm of MATLAB (trainbu), with mean squared error (MSE) performance evaluation, was adopted to drive the proposed SOM. This algorithm trains a network with weight and bias learning rules using batch updates. The training was carried out in two phases: a rough training with a large (initial) neighborhood radius and a large (initial) learning rate, followed by a fine-tuned training phase with a smaller radius and learning rate. The rough training phase can span any number of iterations depending on the capacity of the processing device.
In this paper, we kept the number of iterations at 200, with initial and final neighborhood radii of 5 and 2, respectively, and a learning rate decreasing from 0.5 to 0.1. The fine training phase also had a maximum of 200 epochs and a fixed learning rate of 0.2. Selection of the best centroids of the genome features within each cluster was based on the Euclidean distance criterion. The algorithm configures output vectors into a topological presentation of the original multi-dimensional data, producing a SOM in which individuals with similar features are mapped to the same map unit or nearby units, thereby creating a smooth transition from related genome sequences to unrelated genome sequences over the entire map.
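The paper drives the SOM with MATLAB's batch trainbu algorithm; purely as an illustration of the update rule above, a minimal online SOM in Python (our sketch; the grid size and decay schedules are hypothetical choices, with radii and rates echoing the rough-phase values quoted here) could look as follows.

```python
import numpy as np

def train_som(X, grid=(10, 10), epochs=200, r0=0.5, r1=0.1, rad0=5.0, rad1=2.0):
    """Minimal online SOM: rectangular grid, Gaussian neighborhood, decaying rate/radius."""
    rng = np.random.default_rng(0)
    h, w = grid
    W = rng.random((h * w, X.shape[1]))                  # prototype weight vectors
    gy, gx = np.divmod(np.arange(h * w), w)              # grid coordinates of each neuron
    for t in range(epochs):
        frac = t / max(epochs - 1, 1)
        r = r0 * (r1 / r0) ** frac                       # learning rate: 0.5 -> 0.1
        rad = rad0 * (rad1 / rad0) ** frac               # neighborhood radius: 5 -> 2
        for x in X[rng.permutation(len(X))]:
            q = np.argmin(((W - x) ** 2).sum(axis=1))    # winning neuron (best match)
            d2 = (gy - gy[q]) ** 2 + (gx - gx[q]) ** 2   # squared grid distance to winner
            f = np.exp(-d2 / (2.0 * rad ** 2))           # neighborhood function f(i, q)
            W += r * f[:, None] * (x - W)                # move prototypes toward the input
    return W
```

Mapping each isolate to its nearest prototype row of W then yields the map units on which similar genome sequences co-locate.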
Genome sequence transformation and low similarity profile selection. Each genome sequence is mapped into an equivalent genomic signal (a discrete numeric sequence) using the following individual nucleotide encoding: A = 1; C = 2; G = 3; T = 4. Sequence lengths above 29,000 bp are maintained in this paper as the base input vector, reflecting the approximate (maximum) length of the DNA sequences of the raw SARS-CoV-2 genome. Next, repeated sequences are removed using a Microsoft Excel macro that deletes duplicate columns. A Microsoft Excel macro implementing this process can be found in Supplementary Table S2.

Cognitive knowledge extraction. Knowledge mining has brought huge benefits for quick learning from big data. We apply natural language processing to the genome datasets to extract knowledge of similar strains of the virus. A simple iteration technique is imposed on the SOM isolates (i = 1, 2, 3, ..., n), where n is the maximum number of isolates, as follows: for each isolate pattern, compile similar patterns with the rest of the isolates (i.e., i + 1, i + 2, ..., n); concatenate the compiled isolate(s) into a list (j_1, j_2, ..., j_m), where j is an element of the list; dump the compiled list into CogMap(k_i ∈ j_1, j_2, ..., j_m). As the distance matrix is extremely high-dimensional, suitable representative sequences of the isolate clusters are decoupled into a cognitive map for labeling of the classification targets.
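The numeric encoding and duplicate-column removal can equally be done outside Excel; the following pandas sketch (ours; mapping ambiguous bases to 0 is our assumption, not stated in the paper) illustrates both steps.

```python
import pandas as pd

CODE = {"A": 1, "C": 2, "G": 3, "T": 4}

def encode(seq):
    """Map a nucleotide string to its genomic signal; ambiguous bases (e.g., 'N') become 0."""
    return [CODE.get(base, 0) for base in seq.upper()]

# Columns are genome isolates, rows are nucleotide positions.
df = pd.DataFrame({"iso1": encode("ATGGAG"), "iso2": encode("ATGGAG"), "iso3": encode("ATGCAG")})
df = df.T.drop_duplicates().T   # remove repeated sequences (duplicate columns)
```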
Neural network design. Although five core Artificial Neural Network (ANN) areas have been explored, namely Multi-Layer Perceptrons, Radial Basis Networks, Recurrent Neural Networks, Generative Adversarial Networks, and Convolutional Neural Networks, this paper adopts the Multi-Layer Perceptron (MLP) model, a class of feedforward ANNs with at least three layers of nodes: an input layer, a hidden layer, and an output layer (Fig. 14). Except for the input nodes, each node is a neuron that uses a nonlinear activation function. The MLP utilizes a supervised learning technique called backpropagation for training. Our output classes or classification targets (C1-C7) are derived from pattern clusters discovered from learning the SOM. A k-fold cross-validation method is adopted to divide the data into k parts. At each iteration i, the ith fold is used for testing, while the other folds are used for training. In this paper, the data are split into k = 3, 5, 10 and 15 parts, yielding 60, 100, 200 and 300 calls, respectively, on the training and testing mode of each dataset. The k-fold cross-validation method is known to estimate the robustness of the model on new data and is used to drive the validation phase of the NN. As the model is fit on training data, a more realistic estimate of how well the model prediction will work on new cases is obtained. In the current experimental setup, twenty (20) runs of stratified k-fold cross-validation 57 are performed on the male and female datasets using a Neural Network (NN) model developed in MATLAB 2017b.

Figure 14. ANN architecture. A 3-layered network, with one output layer and one hidden layer. The input layer consumes the knowledge-enriched genome datasets comprising extracted patterns from SOM learning of the respective genome isolates and additional knowledge sieved from analysis of the genome sequences (i.e., number of natural clusters discovered from the genomic tree, discovered SOM sub-strain clusters, and link sequences derived from cognitive maps of the various isolates). | 9,735.6 | 2020-10-08T00:00:00.000 | [
"Computer Science"
] |
Reducing phenolic off-flavors through CRISPR-based gene editing of the FDC1 gene in Saccharomyces cerevisiae x Saccharomyces eubayanus hybrid lager beer yeasts
Today’s beer market is challenged by a decreasing consumption of traditional beer styles and an increasing consumption of specialty beers. In particular, lager-type beers (pilsner), characterized by their refreshing and unique yet very uniform aroma and taste, struggle with their sales. The development of novel variants of the common lager yeast, the interspecific hybrid Saccharomyces pastorianus, has been proposed as a possible solution to address the need for product diversification in lager beers. Previous efforts to generate new lager yeasts through hybridization of the ancestral parental species (S. cerevisiae and S. eubayanus) yielded strains with an aromatic profile distinct from the natural biodiversity. Unfortunately, next to the desired properties, these novel yeasts also inherited unwanted characteristics. Most notable is their phenolic off-flavor (POF) production, which hampers their direct application in industrial production processes. Here, we describe a CRISPR-based gene editing strategy that allows the systematic and meticulous introduction of a naturally occurring mutation in the FDC1 gene of genetically complex industrial S. cerevisiae strains, S. eubayanus yeasts and interspecific hybrids. The resulting cisgenic POF- variants show great potential for industrial application and for diversifying the current lager beer portfolio.
Introduction
Although interspecific hybridization (i.e. hybridization between two different species) is believed to be rare in nature, next-generation sequencing recently revealed the presence of several interspecific hybrid yeasts within the Saccharomyces yeast clade [1][2][3]. While some hybrids have occasionally been isolated from natural habitats, most isolates were obtained from man-made industrial environments [4,5]. The best-known and most studied example of such an interspecific hybrid is Saccharomyces pastorianus, the yeast species used for lager beer production. This hybrid yeast originated from a cross between the commonly used ale beer yeast Saccharomyces cerevisiae and the cold-tolerant species Saccharomyces eubayanus [6][7][8]. The resulting S. pastorianus hybrid combines the ability of S. cerevisiae to efficiently ferment sugars in beer wort with the cold-tolerance of S. eubayanus, making it the ideal yeast to perform lager beer fermentations, which are typically carried out at lower temperatures [6,9,10]. Besides S. pastorianus, other hybrid types are associated with industrial environments, such as S. cerevisiae x S. kudriavzevii hybrids (isolated from ale beer and wine fermentations) and S. cerevisiae x S. uvarum hybrids, which are sometimes isolated from wine and beer fermentations [1,[11][12][13].
The discovery of an increasing number of interspecific hybrids in industrial fermentation processes inspired researchers to mimic the hybridization in the lab, often with the aim of generating new variants that would expand the existing spectrum of industrial yeasts [8]. Hybridization between different S. cerevisiae strains has proven an effective method to generate new variants with interesting industrial properties [14][15][16], and the ability to include non-cerevisiae strains in the breeding schemes further broadens the gene pool and thus the phenotypic diversity of the resulting hybrids. Over the past years, multiple reports have described hybridization between S. cerevisiae strains and S. uvarum [17][18][19], S. eubayanus [19][20][21][22][23][24], S. kudriavzevii [25,26] or S. arboricola [19]. The newly generated interspecific hybrids often show hybrid vigor in their fermentative capacity, broadened temperature tolerance and/or diversified metabolite and aroma profiles. Interspecific hybrids therefore open new routes to address changes in the global fermented beverage market, including the increasing demand for low-alcohol, high-flavor and unique products [8,27].
While newly formed interspecific hybrids combine interesting characteristics of their respective parental species, they can also inherit undesired phenotypes that impede their direct implementation in industrial production processes. More specifically, the non-cerevisiae strains are less adapted to industrial fermentations, and as a result they often perform poorly under the specific stress conditions imposed by industrial environments. Moreover, the wild yeasts often produce certain undesirable aroma compounds. Arguably the most important drawback of the wild species is the production of phenolic off-flavors (POF), most notably 4-vinyl guaiacol (4VG) [19,21,23]. In beer, 4VG is usually an undesirable yeast metabolite that imparts a very distinct spicy, clove-like flavor. It is produced by yeast through the bioconversion of ferulic acid, present in the endosperm of the malt and barley, to its decarboxylated derivative, 4VG [28,29]. The genetic underpinnings of this phenotype are well described and involve the action of Fdc1p and Pad1p [28,29]. The former decarboxylates ferulic acid, while the latter provides the prenylated flavin mononucleotide (FMN) cofactor of Fdc1p, required for its function.
Several successful strategies have been described to obtain artificial interspecific yeast hybrids that are POF-. First, after the hybrid has been formed, an additional backcross to the POF- S. cerevisiae parent effectively removes the phenotype. However, newly formed interspecific hybrids are usually sterile due to the postzygotic barriers between members of the Saccharomyces clade, which limit interspecific hybrids to a vegetative lifestyle [14,30]. Interestingly though, recent publications have found ways to circumvent this hybrid sterility. A first approach is the generation of allotetraploid interspecific hybrids via rare mating of a diploid S. cerevisiae strain with a diploid non-cerevisiae yeast cell [18,20,31]. The resulting allotetraploid interspecific hybrids can form viable allodiploid spores, which can be used for backcrossing with spores of the POF- parental strain or with another yeast. This approach allowed Krogerus and coworkers to generate a POF- interspecific yeast hybrid combining the genetic material of three parental strains [20]. Nevertheless, this approach also has some major drawbacks. It relies on a rather complex breeding scheme, on the generation of auxotrophic mutants of the candidate parental yeasts and on two consecutive rounds of breeding that require extensive screening of the segregants.
A second approach involves direct modification of the non-cerevisiae parent to eliminate the POF phenotype using mutagenesis [32]. Segregants of the S. eubayanus parental strain are subjected to UV mutagenesis and subsequently screened to identify POF- mutants that can be directly applied in a breeding scheme. However, this strategy also has substantial limitations, most notably the large-scale screening required to identify positive mutants and the risk of off-target mutations with potential undesired phenotypic effects. Despite the availability of a high-throughput screening method for POF production [33], identification of a POF- mutant with no or a very limited number of mutations in other genes would require a screening setup with an even higher throughput.
The Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) and CRISPR-associated protein (Cas9)-based genome editing technology offers a new tool to modify the phenotypes of industrial Saccharomyces yeasts [34,35]. For example, a CRISPR-based genome editing strategy successfully reduced urea production in wine yeasts [36] and introduced the hop monoterpene biosynthesis pathway into an ale brewing yeast, yielding yeast cells that produce certain hop aromas [37]. Moreover, a recent publication describes an optimized CRISPR strategy to alter the genome of the industrial S. pastorianus yeasts CBS1483 and W34/70, which allows specific phenotypes, such as ester production, to be altered efficiently by knocking out the responsible genes [38].
While CRISPR-based technologies demonstrate a broad spectrum of potential applications, the legislation surrounding the use of cisgenic gene-edited organisms differs widely between countries. A recent EU ruling states that these organisms should follow the same guidelines as other genetically modified organisms. By contrast, other countries, including Brazil, the USA, Japan and Argentina, have installed specific guidelines for the use of CRISPR-based gene editing that in some cases allow such modified organisms to be used without following the GM legislation, which greatly increases their industrial potential [39][40][41].
In this study, we report a CRISPR-based gene editing strategy to develop cisgenic POF- variants of genetically complex industrial yeasts and interspecific hybrids by introducing a naturally occurring loss-of-function mutation in the FDC1 gene. We applied and optimized this strategy for S. cerevisiae strains of varying ploidy, a non-cerevisiae species (S. eubayanus), and newly developed interspecific lager yeasts. We show that our strategy allows the introduction of homozygous mutations, resulting in cisgenic mutants that lost the ability to produce POF without undesirable side effects.
Yeast strains used in this study
Yeast parental strains for the generation of interspecific hybrids were selected from a collection of 301 industrial and wild Saccharomyces yeasts, previously characterized by Steensels and coworkers [16]. S. cerevisiae strains SA003 and BE011 were selected based on their POF pheno- and genotype, as well as for their ability to form viable spores [16,42]. Additionally, S. eubayanus strains WL022 (NPCC1286 [43]) and WL024 (NPCC1292 [43]) were selected based on their temperature tolerance and their sporulation efficiency and viability. S. cerevisiae strains BE002, BE014, BE020 and BE074 were selected as candidate strains to test the proposed CRISPR-based genome editing strategy in industrially relevant S. cerevisiae strains. An overview of the yeast strains used is given in Table 1.
Interspecific hybrid generation through spore-to-spore mating and hybrid-state confirmation
Interspecific yeast hybrids were generated by the previously described spore-to-spore mating technique [23]. First, parental strains were subjected to a random spore isolation protocol [15]. Next, a micromanipulator (MSM, Singer Instruments) was used to pair two spores, one from each parental strain, on a YPD2% agar plate (2% [wt vol-1] Bacto peptone, 1% [wt vol-1] yeast extract, 1.5% [wt vol-1] agar and 2% [wt vol-1] glucose). After six to eight hours of incubation at room temperature, the formation of a zygote ('shmoo') was investigated. Possible hybrids were further purified by restreaking (3x) the strains on synthetic 12˚P wort medium (Light spray malt extract). PCR-confirmed interspecific hybrids were stored at -80˚C for prolonged periods to ensure strain purity. Afterwards, the generated hybrids were genetically stabilized according to the previously reported stabilization protocol [23].
CRISPR-Cas9 based gene editing
Description plasmid. An S. cerevisiae compatible version of the Candida albicans solo vector CRISPR system, previously described by Vyas and coworkers [45], was used as a platform for the CRISPR-based gene editing [46]. The S. cerevisiae and C. albicans codon-optimized Cas9 endonuclease (CaCas9, in which the use of the 'CUG' leucine codon, predominantly translated as serine by CTG clade species such as C. albicans, is avoided [45]), as well as the single guide (sg)RNA, were introduced into the yeast shuttle vector pRS416, which provides a CEN/ARS sequence for plasmid maintenance in yeast [47]. Both the CaCas9 and the sgRNA are preceded by a constitutive promoter (TEF1p and SNR52p, respectively). Species-specific sgRNA sequences (S. cerevisiae 5'-GGCAAGTACTTACAAACGTA-3'; S. eubayanus 5'-GGCAAGTATTTGCAAACGTA-3') were cloned into the vector as described previously [45].
Repair templates. Double-stranded (ds)DNA oligos were created as repair templates for the homology-directed repair of the induced DSB. Each repair template is 100 nucleotides long, centered around the induced DSB in the yeast DNA, and contains the desired mutations in its sequence. The 100-nucleotide repair templates were generated via a PCR fill-in of two 60 bp primers with an overlap of 20 nucleotides at their 3' ends (for primer sequences, see S10 Table).
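As an aside, the geometry of such a fill-in (two 60-nt primers sharing a 20-nt 3' overlap yielding a 100-nt product) is easy to verify; the sketch below is ours, not part of the published protocol.

```python
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(s):
    """Reverse complement of a DNA string."""
    return s.translate(COMP)[::-1]

def fill_in(fwd, rev):
    """PCR fill-in of two 60-nt primers overlapping by 20 nt at their 3' ends."""
    top = revcomp(rev)                       # reverse primer read on the top strand
    assert fwd[-20:] == top[:20], "primers must share a 20-nt 3' overlap"
    return fwd + top[20:]                    # 60 + 60 - 20 = 100-nt template
```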
Transformation. A standard lithium acetate-based yeast transformation protocol was used to transform both the CRISPR plasmid and the repair template into the target strains [33]. First, yeast was grown overnight in 5 mL YPD2% growth medium at 30˚C, 200 rpm, after which 1 mL of the preculture was transferred to 50 mL YPD2% growth medium and incubated for an additional 4 hours (30˚C, 200 rpm). Next, the yeast cell culture was centrifuged (3 minutes at 3000 rpm) and cells were resuspended in 200 μL 0.1 M lithium acetate solution. After 10 minutes of incubation at room temperature, 50 μL of the cell culture was mixed with 500 ng of the plasmid in which the corresponding sgRNA was cloned, 5 to 25 μg (adjusted protocol) of repair template DNA, 300 μL PLI (142 M polyethylene glycol, 0.12 M lithium acetate, 0.01 M Tris (pH 7.5) and 0.001 M EDTA) and 5 μL salmon sperm DNA (1 mg.mL-1), and incubated for 30 minutes at 42˚C. Cells were centrifuged (3 minutes at 3000 rpm) and resuspended in fresh YPD2%, after which they were recovered overnight on YPD2% agar plates at 30˚C. Selection of plasmid-containing cells was done via replica plating onto selective agar plates (YPD2% + clonat (0.2 μg.mL-1)), followed by a 2-day incubation at 30˚C. Growing colonies were subjected to a second round of selection on YPD2% + clonat agar plates or immediately plated on YPD2% agar plates (adjusted protocol). After selection, strains were grown for three consecutive rounds on YPD2% agar plates in order to induce plasmid loss, prior to long-term storage at -80˚C.
Introduction of the correct mutation was verified via Sanger sequencing with species-specific primers (see S10 Table for an overview of the primers used).
Absorbance-based POF measurement
The ability of the yeasts to produce POF was tested via the absorbance-based detection method described previously [33]. Yeasts were inoculated in 150 μL liquid YPD2% growth medium, supplemented with 100 mg.L-1 ferulic acid, in a 96-well plate. In each plate, a POF- (W34/70) and a POF+ (S. mikatae NCYC2888) control were included. The 96-well plates were sealed with an aluminum sticker and incubated for 5 days at 30˚C, 200 rpm. After centrifugation (3 min, 3000 rpm), 100 μL of the supernatant was transferred to a new 96-well plate and the remaining concentration of ferulic acid was measured at a wavelength of 325 nm (Tecan Infinite 200 PRO, Switzerland). Yeasts were regarded as POF- if the absorbance at 325 nm was above the lower limit of the 90% confidence interval of the POF- control (W34/70).
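The classification rule can be expressed in a few lines; the sketch below is our reading of it and assumes a t-based confidence interval over the replicate control measurements, which the paper does not spell out.

```python
import statistics as st
from scipy import stats

def is_pof_negative(sample_a325, control_a325, conf=0.90):
    """POF- call: remaining ferulic acid (A325) above the lower bound of the
    confidence interval of the POF- control replicates."""
    m, s, n = st.mean(control_a325), st.stdev(control_a325), len(control_a325)
    lower = m - stats.t.ppf(0.5 + conf / 2, n - 1) * s / n ** 0.5
    return sample_a325 > lower
```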
Cell concentration of the pregrowths was measured (BioRad TC20 automated cell counter, USA), and the calculated number of cells was used to inoculate 150 mL of a 16˚P wort (17% [wt.vol-1] Light spray malt extract, Brewferm, Belgium, supplemented with 0.005 mg.L-1 Zn2+, autoclaved for 10 minutes at 110˚C) to a starting concentration of 1.5 x 10^7 cells.mL-1.
The 250 mL bottles were equipped with a water lock and a stirring bar, after which they were incubated at 14˚C with agitation at 150 rpm. Weight loss was measured daily to track fermentation kinetics. Fermentations were stopped when the daily weight loss was 0.05 g or less. Next, the fermentations were cooled on ice to prevent evaporation of the volatile compounds, and samples were taken for chromatographic analysis (HS-GC-FID, Shimadzu Corporation), ethanol (Alcolyzer Beer ME, Anton Paar GmbH), and sulfite and glycerol (Gallery Plus Beermaster, Thermo Scientific) measurements. The leftover fermented medium was used for sensory analysis.
Data analysis and representation
All data analyses and visualizations were performed in R [48]. Statistical analyses were conducted with the multcomp package (version 1.4-8 [49]). Figures were generated using the ggplot2 package (version 2.2.1 [50]).
Results
Our goal was to develop and test a CRISPR-based genome editing strategy aimed at modifying the POF phenotype of industrial yeasts, including polyploid and aneuploid yeasts as well as interspecific hybrids. More specifically, we aimed at introducing a loss-of-function SNP mutation in the FDC1 gene that occurs naturally in many domesticated industrial ale beer yeasts [42].
Identification of a CRISPR target region to modify the POF phenotype
The first step in CRISPR-based gene editing is finding an appropriate target region and designing the necessary sgRNA sequence [35]. This region needs to meet certain basic requirements. First, the region should harbor a loss-of-function mutation present in the natural biodiversity of Saccharomyces yeasts. Introducing such a mutation in the same species or a closely related one is regarded as a cisgenic modification, which favors its industrial applicability. Second, the region should be highly conserved between different strains and even species, ensuring that the strategy works in several yeast strains, species and hybrids. Third, the region should contain a neighboring PAM sequence, essential for the correct identification of the target site by the Cas9 endonuclease. Fourth, the region should also be unique in the genome to avoid off-target activity.
Analysis of the genome sequences of industrial POF- S. cerevisiae yeasts, as described by Gallone and coworkers [42], indicates that the disruption of either PAD1 or FDC1 inhibits POF production [28,33]. The majority (73.80%) of POF- strains from the 'Beer 1' lineage [42] share a C to T mutation at position 460 of the FDC1 coding sequence that replaces a glutamine codon (CAA) by a stop codon (UAA) (Fig 1A). This premature stop codon likely disrupts the protein function, as neither the dimerization domain nor the catalytic pocket of the protein is formed anymore (S3 Fig) [51]. Importantly, for the other species within the Saccharomyces species complex with a known FDC1 sequence, the same C to T mutation at position 460 of the FDC1 coding sequence introduces a similar stop codon (UAA) (Fig 1B).
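A quick sanity check of the position arithmetic (our illustration only): nucleotide 460 is the first base of codon 154, so the C to T change converts the glutamine codon CAA into the stop codon TAA.

```python
codon_number = 154                        # glutamine codon hit by the C>T change (p.Q154*)
pos = 460                                 # 1-based position in the FDC1 coding sequence
assert (codon_number - 1) * 3 + 1 == pos  # codon 154 starts at nucleotide 460

codon = "CAA"                             # glutamine
edited = "T" + codon[1:]                  # C460T alters the first base of the codon
assert edited == "TAA"                    # premature stop codon
```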
Moreover, there is a high degree of conservation in the DNA sequence surrounding this mutation in all 156 previously sequenced S. cerevisiae strains [42]. Indeed, the proposed sgRNA sequence is 100% identical for 155 out of the 156 sequenced S. cerevisiae strains. In addition, a PAM sequence (NGG) could be found in close proximity to this mutation (nine nucleotides downstream), which is crucial for the correct guidance of the CaCas9 endonuclease to its target (Fig 1B) [45]. Importantly, the same PAM sequence, situated nine nucleotides downstream of the target mutation site, can be found in the FDC1 sequence of all other sequenced members of the Saccharomyces species complex, allowing the design of a possible sgRNA sequence for these Saccharomyces species (Fig 1B).
Lastly, possible off-target reactions of the CRISPR system were assessed by blasting the newly designed species-specific 20-nucleotide guide sequences plus the possible PAM sequence (NGG) against the genomes of the 156 previously sequenced S. cerevisiae yeasts [42], as well as against the de novo assembly of the S. eubayanus genome [53]. This analysis shows that the proposed sgRNA sequences are species- and target-unique, as no other sequences with more than 85% similarity were detected. Moreover, mismatches or gaps in the off-target sequences with the highest similarity occur in the first 13 bp immediately upstream of the PAM sequence, which has been shown to be sufficient to achieve 100% off-target-free gene editing in Saccharomyces yeasts [54].
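While the authors used BLAST, the essence of such a screen, scanning for near-matches to the guide followed by an NGG PAM, can be illustrated with a simple Hamming-distance scan (our sketch; a real screen would also scan the reverse-complement strand and use alignment-aware scoring).

```python
def off_target_hits(genome, guide, max_mismatch=3):
    """Scan one strand for near-matches to a 20-nt guide followed by an NGG PAM."""
    g, hits = len(guide), []
    for i in range(len(genome) - g - 2):
        window, pam = genome[i:i + g], genome[i + g:i + g + 3]
        if pam[1:] != "GG":                          # require the NGG motif
            continue
        mm = sum(a != b for a, b in zip(window, guide))
        if mm <= max_mismatch:
            hits.append((i, mm))                     # (position, mismatch count)
    return hits
```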
Together, this indicates that the proposed region (Fig 1) is an appropriate candidate for effective cisgenic CRISPR-based engineering of the POF phenotype in pure and hybrid Saccharomyces species.
Evaluation of CRISPR efficiency in euploid S. cerevisiae and non-cerevisiae yeasts
Gene editing of industrial Saccharomyces yeasts is complicated by two main factors. First, many industrial yeasts are poly- and/or aneuploid and can therefore contain multiple alleles of the target genes, which all need to be modified by the CRISPR system. Analysis of the genomes of industrial ale yeasts shows that these strains have an average ploidy level of 3.52, with some yeasts showing a ploidy level above 4. As POF production is a dominant trait, all alleles of FDC1 or PAD1 need to be deactivated to affect the phenotype. Furthermore, CRISPR protocols are generally optimized for lab strains of S. cerevisiae, and their efficiency for editing non-cerevisiae or mixed genomes can be low [35,46,54].
To assess the efficiency of CRISPR-based gene editing in polyploid genomes, we introduced the desired nonsense mutation into FDC1 of POF+ S. cerevisiae strains with different ploidy levels. Besides a lab strain (haploid and diploid S288c), the POF+ industrial beer yeasts BE014 and BE020 (diploid) and BE002 and BE074 (triploid) were subjected to the CRISPR transformation. These strains contain two (BE014, BE020) or three (BE002, BE074) functional copies of the FDC1 gene, and at least one functional PAD1 gene [42].
Overall, a decrease in the efficiency of CRISPR gene editing was observed with increasing ploidy levels and genome complexity. The haploid and diploid strains showed a high success rate similar to that of the laboratory strains, and introduction of the mutations was observed in these strains.

Fig 1. (A) Natural loss-of-function mutations in FDC1 across POF- S. cerevisiae strains [33,42]. Bars indicate the presence and the position of the mutation in Fdc1p based on the S. cerevisiae S288C reference sequence. The height of the bars indicates the frequency of the mutation across the full collection of POF- S. cerevisiae strains considered. The distribution of each mutation is further dissected across S. cerevisiae subpopulations (circle colors) and its proportion within each subpopulation is summarized (circle size) (exact frequencies are reported in S1 Table). The type and position of each mutation in the coding sequence are annotated on top of each bar. Only POF- S. cerevisiae strains that harbor homozygous loss-of-function mutations were included in the analysis. (B) Natural FDC1 loss-of-function point mutation selected for the CRISPR-Cas9 gene editing procedure (yellow cone). The alignment represents a zoom-in of the targeted region in the coding sequence of FDC1 across six Saccharomyces species (-20 nt, +20 nt from the targeted point mutation). Colored boxes highlight specific areas of the targeted region: the targeted glutamine codon (CAA) that will be replaced by a stop codon (TAA) (yellow box), the PAM sequence (red box) and the guide RNA (green box). The bar chart represents the alignment consensus annotation for each position in the targeted region as calculated by Jalview [52].
To evaluate the efficiency of the strategy in non-cerevisiae species, we introduced the same C to T mutation in the FDC1 gene of two POF+ diploid S. eubayanus yeasts (WL022 and WL024) (Fig 1). This gene editing was highly successful, with an efficiency of 100% (3/3 and 1/1, respectively) for both strains, yielding POF- variants that may be suitable for industrial application [8,32].
Evaluation of the newly formed mutants revealed that the introduction of the homozygous nonsense mutation effectively abolishes the ability to produce the unwanted POF aroma (Fig 2).
Gene editing of interspecific hybrid yeasts can induce loss of chromosomal fragments
Next, the CRISPR-based gene editing was evaluated in interspecific hybrids. To this end, we first generated POF+ hybrids between S. cerevisiae and S. eubayanus, after which we attempted to remove their POF production by introducing a nonsense mutation in their FDC1 genes using the developed CRISPR strategy.
Initially, the standard CRISPR-based gene editing strategy [46] was used to introduce the selected SNP mutation into the S. eubayanus-derived FDC1 allele of the newly generated interspecific hybrids H1 and H2. The S. cerevisiae parent of H1 (BE011) is POF- and carries a homozygous nonsense mutation in FDC1 (p.W497*) [42]. The S. cerevisiae parent of H2 (SP003) is also POF-; its FDC1 is heterozygous for the nonsense mutation p.Q154* and homozygous for the insertion T>TA at position p.166 [42]. As these mutations differ from the one targeted in the developed CRISPR-based gene editing strategy, this allows us to determine the species-specificity of the designed strategy towards the alleles derived from the S. eubayanus and S. cerevisiae parents.
For the first interspecific hybrid, H1, 30 POF- variants were obtained out of 32 tested (94%). Similarly, H2 yielded 21 POF- variants out of 24 (87.5%) (S1 Fig). However, a control PCR reaction (primer pair SS_FWSE and SS_RVSE (S10 Table)), which amplifies 490 bp surrounding the DSB induced in the S. eubayanus-derived FDC1 gene, did not yield an amplification product. Interestingly, further genetic characterization of this region revealed that the CRISPR editing induced the loss of the region downstream of the targeted site on the S. eubayanus-derived Chr13 (S2 Fig). Genomes of artificial interspecific hybrids are notoriously unstable, and the introduction of a double-stranded break likely caused partial loss of the respective chromosome [9,55,56]. Whereas this genomic rearrangement yields POF- variants, it is undesirable as it might have major unwanted pleiotropic consequences for other phenotypes. Therefore, the strategy was further optimized to eliminate this detrimental effect in interspecific hybrids. Increasing the concentration of repair template from 5 to 25 μg during the transformation stage and shortening the selection step for the presence of the CRISPR plasmid from two rounds to only one round of selection allowed us to overcome this interspecific hybrid genome instability. This strategy yielded POF- variants (Fig 3) with efficiencies of 12.9% (H1; 4/31) and 6.84% (H2; 8/117). None of these variants showed partial chromosome loss. Moreover, none of these variants acquired the targeted mutation in the S. cerevisiae FDC1 allele (confirmed via Sanger sequencing), demonstrating the specificity of the designed CRISPR-based genome editing strategy.
Gene edited variants do not show phenotypic side effects
To determine whether the gene editing procedure introduced any unwanted side effects on the fermentation performance of the yeasts, we evaluated their performance in lab-scale beer fermentations and compared the profiles to those of their respective wild types (WT).
Overall, profiles of the various gene edited variants were similar to those of their WT ancestral strains, with no significant differences in CO2 production throughout the fermentation (Fig 4A and S2 Table), with the exception of strain WL022, where the mutant showed a faster fermentation at T2 (ANOVA test; P-value < 0.001) and T3 (P-value < 0.01).

Fig 3. CRISPR mutants lose their ability to convert ferulic acid to 4VG. Yeast cultures were incubated with an excess of ferulic acid, the precursor of the POF aroma. A decrease in ferulic acid indicates POF aroma formation. Remaining ferulic acid was measured for two control strains (POF- control (green); POF+ control (red)), the S. cerevisiae and S. eubayanus parental strains (dark blue and orange, respectively), and interspecific hybrids H1 and H2 (dark purple) with four of their respective CRISPR variants (light purple). Error bars represent the standard deviation of two biological replicates. The dotted line represents the applied cut-off value (lower border of the 90% confidence interval of the POF- control). https://doi.org/10.1371/journal.pone.0209124.g003

At the end of the fermentation, 17 different flavor-active metabolites were evaluated. Apart from the targeted phenotype, namely the production of 4VG (see further), no differences between gene edited variants and the WT were observed (ANOVA and post-hoc Tukey test; P-value > 0.05; S3-S8 Tables), except for S. eubayanus WL024, where a significant difference was measured for one aroma compound, isoamyl alcohol (P-value = 0.004; average isoamyl alcohol production of 227.34 mg.L-1 for WL024 and 253.85 mg.L-1 for WL024_A). Principal component analysis (PCA), based on the production of the remaining 16 metabolites, shows a clear clustering of the different gene edited variants with their respective WT ancestors. This further confirms that, apart from the production of the POF aroma, the overall flavor profile of the strains remained unchanged compared to that of the parental WT strains (Fig 4B).
As expected, all mutants showed a dramatic decrease in 4VG production (Fig 4C; P-values ranging from <0.001 to 0.033; S3-S8 Tables), with the 4VG concentrations remaining below or around the reported flavor threshold of 0.3 mg.L-1 [29].
Subsequent sensory analysis by a trained panel supported the above-mentioned measurements. Whereas BE014 and BE020 clearly produced clove-like off-flavors, the fermentation products obtained with the respective gene edited variants were described as 'very fruity' and 'neutral'. Similarly, mutants of S. eubayanus strains WL022 and WL024 were described as 'slightly fruity' to 'fruity'. H1's aroma was defined as 'slightly fruity' with 'phenolic' notes, while all four gene edited variants of H1 were scored by the panel as 'very fruity', highlighting the masking effect of 4VG on fruitiness. A similar trend was observed for H2. Interestingly, the aromatic contribution of all generated gene edited variants was still remarkably different from the aroma produced by the reference lager yeast W34/70, revealing the potential of these hybrids to broaden the aromatic diversity of lager beers without introducing unwanted clove-like phenolic off-flavors.
Discussion
Today's beer market is characterized by an increased demand for niche products and diversification [57]. These market forces have led to an increased interest in novel beer yeasts that can impart new aromas. However, some of the most interesting strains are characterized by the production of the undesirable aroma compound 4VG. Here we describe a new CRISPR-based gene editing strategy that allows 4VG production to be specifically modified in various yeasts, including feral S. cerevisiae isolates, non-cerevisiae Saccharomyces strains and interspecific hybrids.
Compared to previous approaches aimed at modifying industrially relevant phenotypes in yeast [36,38,54], this study introduces a naturally occurring point mutation, rather than deleting the gene of interest, thereby generating cisgenic variants that are likely exempted from GM regulations in countries such as Argentina, the USA, Japan and Brazil [40,41,58].
Gene editing efficiency ranged from 100% for lab strains and diploid industrial S. cerevisiae strains to about 10% for more complex, poly- and aneuploid industrial S. cerevisiae yeasts. We also report the first CRISPR-based engineering of S. eubayanus, which was also highly efficient (100%). The reported gene editing efficiencies are in line with previously reported efficiencies.

Fig 4 (caption, continued). ... butyrate, 11 = isoamyl acetate, 12 = ethyl hexanoate, 13 = phenethyl alcohol, 14 = ethyl octanoate, 15 = phenethyl acetate and 16 = ethyl decanoate. (C) 4VG production (measured via HS-GC-FID; mg.L-1) of W34/70 (green), S. eubayanus WL022 and WL024 (dark orange) with their respective gene edited variants (light orange), S. cerevisiae BE014 and BE020 (dark blue) with their respective gene edited variants (light blue), as well as artificial interspecific hybrids H1 and H2 (dark purple) with their gene edited variants (light purple). Error bars represent the standard deviation of two biological replicates. The dotted line represents the flavor threshold of 4VG in beer (0.3 mg.L-1), as reported in [29].
Although CRISPR-based editing was previously reported to be an efficient route to modify S. pastorianus [38], the Cas9-induced double-stranded DNA break caused an unexpected and undesirable partial chromosome loss in our hybrids [35]. Off-target activity of CRISPR genome editing in haploid or homozygous Saccharomyces yeasts has been shown to be very rare [59]. More recently however, allele-specific gene editing in artificial S. cerevisiae x S. eubayanus hybrids was reported to cause loss of heterozygosity (LOH). Specifically, the induced double-stranded break in the S. eubayanus-derived locus was not repaired by the provided repair template via homology-directed repair, but rather via LOH [60]. Although further research is needed, the fact that FDC1 is located in the subtelomeric region in S. cerevisiae (Chr IV) and S. eubayanus (Chr13) could favor partial chromosome loss over repair via LOH. One way to reduce such unwanted structural rearrangements is to design the repair template in such a way that the PAM site is inactivated [54]. This prevents continued cutting of the site by the Cas9 endonuclease after successful introduction of the desired mutation [61]. However, this strategy is not ideal for editing organisms targeted for food production, as in most cases the resulting mutant would not be cisgenic and would thus be subject to GM laws. We therefore modified the gene editing protocol by increasing the amount of repair template and reducing the CRISPR-Cas9 endonuclease activity, which seemed to reduce the unwanted genomic rearrangements and increase repair via HDR. The proposed technique showed a 6.84 to 12.9% efficiency in specifically introducing a single SNP into the subtelomeric S. eubayanus-derived FDC1 gene of the newly generated interspecific hybrids.
Overall, some of the generated POF- variants show great potential for industrial application, as their unique aroma profiles are no longer masked by 4VG. Additionally, the CRISPR gene editing strategy described in our study offers a general tool for tuning the characteristics of various aneuploid and non-cerevisiae yeasts. Specifically, the combination of our gene editing protocol with the ever-increasing number of identified quantitative trait loci (QTL) represents a formidable opportunity to obtain superior industrial yeasts through gene editing [62].
Supporting information S1 Fig. POF phenotype of CRISPR variants obtained with the standard protocol. Remaining ferulic acid was measured for two control strains (POF- control (green); POF+ control (red)) and for interspecific hybrids one and two (dark green) with their respective POF- (light green) and POF+ (dark red) CRISPR variants, obtained via the standard CRISPR protocol. Error bars represent the standard deviation of two biological replicates. The dotted line represents the cut-off value used (lower border of the 90% confidence interval of the POF- control). (EPS)
S2 Fig. Standard CRISPR protocol causes genetic instability in complex hybrid genomes.
Three primer pairs were designed, each targeting a different part of the S. eubayanus-derived Chr13 in the generated interspecific hybrids H1 and H2. Primer pairs one and two amplify regions 4 kb and 8 kb upstream of the introduced DSB, respectively. Primer pair three amplifies a region 1 kb downstream of the introduced DSB on the S. eubayanus-derived chromosome of our hybrids (visual representation not depicted to scale). Primer pair three does not yield the expected-size PCR product for gene edited variants generated with the standard CRISPR protocol (H1_1 and H2_1), but does for gene edited variants generated with the adjusted CRISPR protocol (H1_A and H2_A) proposed within this manuscript. (EPS) S3 Fig. 3D crystal structure of Fdc1p. Fdc1p acts as a dimer (blue and red form one monomer; cyan and orange, the second one). The blue and cyan parts of the molecules represent the N-terminal part of the protein that is still formed before the stop-gain mutation p.Q154*, whilst the red and orange parts disappear when Q154 is replaced by a stop codon [51]. Two 4VG molecules, which are bound to the catalytic pockets of both monomers, are colored in green. Gel electrophoresis shows both the cerevisiae- and eubayanus-specific PCR products for hybrids H1 and H2, as well as for S. pastorianus W34/70. The S. cerevisiae control strains WI009, BE001 and SP003 only show the S. cerevisiae-specific PCR product, whereas S. eubayanus WL022 only shows the S. eubayanus-specific PCR product. (TIF) S1 Table. Overview of the occurrence of natural loss-of-function mutations in the FDC1 gene across a collection of 76 POF- S. cerevisiae strains. Total frequency is the frequency of the mutations across the complete POF- S. cerevisiae strain collection. Columns four to nine show the frequency of each mutation across S. cerevisiae sub-populations, as described earlier [44]. Only POF- S. cerevisiae strains that harbor homozygous loss-of-function mutations were included in the analysis. (PDF) S2 Table. Statistical analysis of the weight loss measured during fermentation between gene edited variants and their respective WT (from time point 1 to the end of the fermentation). P-values were obtained using ANOVA. All statistical analyses were conducted in R with the multcomp package (* P-value < 0.05; ** P-value < 0.01; *** P-value < 0.001). "/" means fermentations were stopped before this time point. (PDF) S3 Table. Statistical analysis of the phenotypic behavior of H1 compared to the H1 gene-edited variants. Column two represents the P-values obtained with ANOVA. Columns three to twelve represent the P-values obtained with a post-hoc Tukey test. All statistical analyses were conducted in R with the multcomp package (* P-value < 0.05; ** P-value < 0.01; *** P-value < 0.001). (PDF) S4 Table. Statistical analysis of the phenotypic behavior of H2 compared to the H2 gene-edited variants. Column two represents the P-values obtained with ANOVA. Columns three to twelve represent the P-values obtained with a post-hoc Tukey test. All statistical analyses were conducted in R with the multcomp package (* P-value < 0.05; ** P-value < 0.01; *** P-value < 0.001). (PDF) S5 Table. Statistical analysis of the phenotypic behavior of BE014 compared to its gene-edited variants. Column two represents the P-values obtained with ANOVA. Columns three to twelve represent the P-values obtained with a post-hoc Tukey test. All statistical analyses were conducted in R with the multcomp package (* P-value < 0.05; ** P-value < 0.01; *** P-value < 0.001).
(PDF) S6 Table. Statistical analysis of the phenotypic behavior of BE020 compared to its gene-edited variants. Column two represents the P-values obtained with ANOVA. Columns three to twelve represent the P-values obtained with a post-hoc Tukey test. All statistical analyses were conducted in R with the multcomp package (* P-value < 0.05; ** P-value < 0.01; *** P-value < 0.001). (PDF) S7 Table. Statistical analysis of the phenotypic behavior of WL022 compared to its gene-edited variant. Column two represents the P-values obtained with ANOVA. Columns three to twelve represent the P-values obtained with a post-hoc Tukey test. All statistical analyses were conducted in R with the multcomp package (* P-value < 0.05; ** P-value < 0.01; *** P-value < 0.001). (PDF) S8 Table. Statistical analysis of the phenotypic behavior of WL024 compared to its gene-edited variant. Column two represents the P-values obtained with ANOVA. Columns three to twelve represent the P-values obtained with a post-hoc Tukey test. All statistical analyses were conducted in R with the multcomp package (* P-value < 0.05; ** P-value < 0.01; *** P-value < 0.001). (PDF) S9 Table. Overview of aroma and ethanol production from lab-scale lager beer fermentation tests. Quantified yeast-related aroma compounds are represented as concentrations (mg.L-1), total weight loss in grams (g), and ethanol production as a volume percentage. Glycerol and SO2 production are represented as concentrations (g.L-1 and mg.L-1, respectively). H2S production capacity is qualitatively indicated (+, +-, -). Lastly, the score legend used for flavors during sensory analysis was: VS = very slightly; S = slightly; V = very; N = neutral; FR = fruity; POF = cloves, phenolic; FRESH = fresh. (PDF) S10 | 8,531.6 | 2019-01-09T00:00:00.000 | [
"Biology"
] |
TMD parton shower effects in associated $\gamma$ + jet production at LHC
We investigate associated prompt photon and hadronic jet production at the LHC energies using the $k_T$-factorization approach. Our consideration is based on the $\mathcal O(\alpha\alpha_s^2)$ off-shell gluon-gluon fusion subprocess $g^*g^*\to \gamma q\bar q$ and several subleading quark-initiated contributions from $\mathcal O(\alpha\alpha_s)$ and $\mathcal O(\alpha\alpha_s^2)$ subprocesses, taken into account in the conventional (collinear) QCD factorization. The transverse momentum dependent (or unintegrated) gluon densities in a proton are derived from Catani-Ciafaloni-Fiorani-Marchesini (CCFM) evolution equation. We achieve reasonably good agreement with the experimental data taken by CMS and ATLAS Collaborations and demonstrate the importance of initial state parton showers for jet determination in the $k_T$-factorization approach.
Motivation
Investigation of prompt photon and associated hadronic jet production is an important topic of modern experimental and theoretical research [1][2][3][4][5][6]. Photons are called prompt if they originate from the hard partonic subprocess rather than from secondary decays. Such events provide a direct probe of the hard subprocess dynamics, since the produced photons are largely insensitive to the effects of final-state hadronization. The measured $\gamma$ + jet total and differential cross sections are sensitive to the quark and gluon densities in the proton over the whole kinematical region of longitudinal momentum fraction $x$ and hard scale $\mu^2$, and represent an important background to many processes involving photons in the final state, including Higgs boson production (in the diphoton decay mode). Thus, it is essential to have accurate QCD predictions for the corresponding cross sections.
The reported measurements [1][2][3][4][5][6] are in agreement with the results of next-to-leading-order (NLO) perturbative QCD calculations performed using the jetphox Monte-Carlo event generator [7]. The leading-order (LO) calculations based on the Monte-Carlo event generator sherpa [8], which incorporates higher-order tree-level matrix elements and parton shower modeling, also agree well with the measurements [1][2][3][4][5][6]. An alternative description of $\gamma$ + jet data can be achieved in the framework of high-energy QCD factorization [9], or the $k_T$-factorization approach [10]. This approach is based on the Balitsky-Fadin-Kuraev-Lipatov (BFKL) [11] or Ciafaloni-Catani-Fiorani-Marchesini (CCFM) [12] gluon evolution equations and has certain technical advantages in the ease of including higher-order QCD radiative corrections (namely, part of the NLO + NNLO + ... terms corresponding to real initial-state gluon emissions), which can be taken into account in the form of transverse momentum dependent (TMD, or unintegrated) parton distributions 1 . It has become a widely exploited tool, and it is of interest and importance to test it in as many cases as possible.
In the present note we apply the $k_T$-factorization approach to associated $\gamma$ + jet production at LHC energies, continuing the line of our previous studies [15][16][17], where we inspected inclusive photon production as well as associated production of a prompt photon (or Z boson) and a heavy quark jet. Note that associated $\gamma$ + jet production has already been examined in the $k_T$-factorization framework [18,19]. In particular, some photon-jet correlations have been studied at RHIC and Tevatron energies [18]. However, initial-state parton showers, which are important for the proper jet determination in the $k_T$-factorization approach, have not been taken into account in those calculations. A simple model [20] to implement the effects of parton showers into analytical calculations results in some difficulties in the simultaneous description of photon transverse momentum and rapidity distributions in the whole kinematical range [19]. The importance of parton shower contributions to jet production was pointed out in [21], and a method [22] to correctly reconstruct the kinematics of the jets while taking into account TMD parton showers was proposed. The major goal of the present article is to apply the method [22] to associated $\gamma$ + jet production and improve our previous results [19] using a TMD shower implemented in the Monte-Carlo event generator cascade [23]. Our other goal is the selection of the TMD gluon densities in a proton best suited to describe the available experimental data.
The outline of the paper is the following. In Section 2 we briefly describe our approach. In Section 3 we present the results of our calculations and confront them with the available data. Our conclusions are summarised in Section 4.

1 See reviews [13,14] for more information.
Theoretical framework
Let us briefly describe the calculation steps. We start from the off-shell gluon fusion subprocess

g*(k₁) + g*(k₂) → γ(p_γ) + q(p₁) + q̄(p₂),  (1)

where the momenta of all particles are given in the parentheses. The corresponding gauge-invariant off-shell production amplitude was calculated earlier [24,25] and implemented into the Monte-Carlo event generator cascade [23] and the newly developed parton-level Monte-Carlo event generator pegasus [26]. All the details of these calculations were explained in [24,25]. We only mention here that the evaluation of the off-shell matrix element involves a special gluon polarization sum rule:

Σ ε^µ(k_T) ε*^ν(k_T) = k_T^µ k_T^ν / k_T²,  (2)

where ε is the gluon polarization vector and k_T its non-zero transverse momentum. In the collinear limit k_T² → 0 this expression converges to the ordinary one after averaging over the azimuthal angle. In all other respects the calculations follow the standard QCD Feynman rules.
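The azimuthal averaging behind the collinear limit of (2) is easy to verify numerically: the average of k_T^i k_T^j / k_T² over the azimuthal angle equals δ^{ij}/2 in the two transverse dimensions, i.e. the transverse part of the conventional polarization sum. A minimal Python sketch (illustrative only, not part of the actual calculation):

```python
import numpy as np

# Transverse part of the k_T-factorization polarization sum:
#   eps^mu eps*^nu -> kT^mu kT^nu / kT^2, with unit kT = (cos phi, sin phi).
# Averaging over phi should give delta^{ij} / 2.

phis = np.random.uniform(0.0, 2.0 * np.pi, 1_000_000)
kt_hat = np.stack([np.cos(phis), np.sin(phis)], axis=1)  # unit kT vectors

# Monte-Carlo average of kT^i kT^j / kT^2 over the azimuthal angle
avg = np.einsum('ni,nj->ij', kt_hat, kt_hat) / len(phis)

print(avg)                                            # ~ [[0.5, 0], [0, 0.5]]
print(np.allclose(avg, 0.5 * np.eye(2), atol=1e-2))   # True
```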
Following [17], in addition to off-shell gluon-gluon fusion we take into account several subprocesses involving quarks in the initial state, namely the subprocesses (3) - (6), where the momenta of all particles are given in the parentheses. Despite the fact that quark densities are typically much lower than the gluon density under LHC conditions, these processes may become important at very large transverse momenta (or, respectively, at large parton longitudinal momentum fraction x, which is needed to produce large-p_T events), where the quarks are less suppressed or can even dominate over the gluon density. Here we find it reasonable to rely upon the collinear Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) factorization scheme [27], which provides better theoretical grounds in the large-x region. So, we consider a combination of two techniques, with each of them being used under the kinematic conditions where it is most suitable (the gluon-induced subprocess (1) at small x and the quark-induced subprocesses (3) - (6) at large x values). Such a combined scheme was successfully applied to describe the associated production of prompt photons (or Z bosons) and heavy quark jets at the LHC [16,17]. The calculation of the production amplitudes (3) - (6) is very straightforward, and a cross-check of our results has been done using the madgraph tool [28]. Note that numerically we keep only valence quarks in (3) to avoid any double counting. Calculations based on another TMD scenario, the Parton Branching (PB) approach [29,30], should include both sea and valence quark contributions. Similarly to conventional DGLAP-based evaluations, in order to describe the data, the PB calculations have to involve a number of additional higher-order subprocesses (such as the qg → γqg subprocess) properly matched with the leading-order terms (see, for example, [31]). The relation between the PB and the CCFM scheme applied here (see below) is outside our present consideration.
It is well known that photons may also originate from so-called fragmentation processes of partons produced in the hard interaction. However, the isolation requirement applied in the measurements [1][2][3][4][5][6] significantly reduces the rate of these processes: after applying the isolation cuts, such contributions amount only to about 10% of the visible cross section. Therefore, below we neglect the contributions from the fragmentation mechanisms (the isolation requirement and additional conditions which preserve our calculations from divergences have been specially discussed in [24]).
As usual, to calculate the contributions of the quark-induced subprocesses (3) - (6) one has to convolute the corresponding partonic cross sections dσ̂_ab with the conventional parton distribution functions f_a(x, µ²) in a proton:

σ = Σ_{a,b} ∫ dx₁ dx₂ f_a(x₁, µ²) f_b(x₂, µ²) dσ̂_ab(x₁, x₂, µ²),  (7)

where the indices a and b denote a quark and/or gluon, and x₁ and x₂ are the longitudinal momentum fractions of the colliding protons. In the case of off-shell gluon-gluon fusion (1) we employ the k_T-factorization formula:

σ = ∫ dx₁ dx₂ dk²_{1T} dk²_{2T} f_g(x₁, k²_{1T}, µ²) f_g(x₂, k²_{2T}, µ²) dσ̂*(x₁, x₂, k²_{1T}, k²_{2T}, µ²),  (8)

where f_g(x, k_T², µ²) is the TMD gluon density in a proton. A comprehensive collection of the latter can be found in the tmdlib package [32], which is a C++ library providing a framework and an interface to the different parametrizations. In the present paper we have tested the two latest sets (namely, JH'2013 set 1 and JH'2013 set 2), which were obtained [33] from the numerical solution of the CCFM gluon evolution equation. The CCFM equation provides a suitable tool since it smoothly interpolates between the small-x BFKL gluon dynamics and the high-x DGLAP one. The input parameters of the initial gluon distribution were fitted to give the best description of the precision DIS data on the proton structure functions F₂(x, Q²) and F₂^c(x, Q²). For the conventional quark and gluon densities we used the MSTW2008 (LO) set [34]. Numerical calculations at the parton level in the k_T-factorization approach and collinear QCD factorization were performed using the Monte-Carlo event generator pegasus.
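For orientation, the bookkeeping of the convolution (8) can be sketched as a plain Monte-Carlo integration. The density and partonic cross section below are toy placeholders (real TMD sets such as JH'2013 would be read through the tmdlib interface), so only the structure, not the physics, is meaningful:

```python
import numpy as np

rng = np.random.default_rng(0)

def tmd_gluon_toy(x, kt2, mu2):
    """Toy TMD gluon density -- NOT a fitted set such as JH'2013;
    real densities would be taken from the tmdlib interface."""
    return x**-0.3 * (1.0 - x)**5 * np.exp(-kt2 / mu2)

def sigma_hat_toy(shat):
    """Toy off-shell partonic cross section (placeholder shape)."""
    return 1.0 / (shat + 10.0)**2

# Monte-Carlo estimate of
#   sigma = int dx1 dx2 dkt1^2 dkt2^2 f(x1,kt1^2) f(x2,kt2^2) sigma_hat
n, s, mu2 = 200_000, 7000.0**2, 100.0
x1, x2 = rng.uniform(1e-4, 1, n), rng.uniform(1e-4, 1, n)
kt1sq, kt2sq = rng.uniform(0, 400, (2, n))

weights = (tmd_gluon_toy(x1, kt1sq, mu2) * tmd_gluon_toy(x2, kt2sq, mu2)
           * sigma_hat_toy(x1 * x2 * s))
volume = (1 - 1e-4)**2 * 400.0**2
print("toy cross section ~", weights.mean() * volume)
```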
A final important point of our calculations is connected with the proper determination of the associated jet four-momentum: the quarks and gluons produced in the hard subprocesses (1), (3) - (6) can form final-state hadronic jets. In addition, the produced photon is accompanied by a number of gluons radiated in the course of the non-collinear evolution, which also give rise to final jets. From all of these hadronic jets we choose the one (i.e. the leading jet) carrying the largest transverse momentum (and satisfying the experimental cuts) and then compute the cross section of γ + jet production. Technically, we produce a Les Houches Event file [35] in our parton-level calculations performed with the Monte-Carlo event generator pegasus and then process the file with a TMD shower implemented
in cascade, thus fully reconstructing the CCFM evolution. This approach gives us the possibility to take into account the contributions from initial-state parton showers in a consistent way and, of course, essentially differs from the simple model [20] used in the previous calculations [19]. This model [20] was based on the assumption that the gluon emitted in the last non-collinear evolution step compensates the whole transverse momentum of the gluon participating in the hard subprocess. Under this assumption, all the other emitted gluons can be collected together in the proton remnant, which carries only a negligible transverse momentum (see [20] for more information). Concerning the quark-induced subprocesses (3) - (6), calculated in the conventional (collinear) QCD factorization, we used the latest version of the pythia package [36] to process the Les Houches Event files generated by pegasus. The jets are reconstructed with the anti-k_T algorithm, implemented in the FastJet tool [37].
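The anti-k_T distance measure itself is compact enough to sketch directly. The following minimal Python implementation (illustrative only; production work uses FastJet, as above) clusters massless (p_T, η, φ) inputs with E-scheme recombination:

```python
import math

def antikt_cluster(particles, R=0.4):
    """Minimal anti-kT clustering. particles: list of (pt, eta, phi) with
    pt > 0. Only illustrates the distances d_ij and d_iB; not optimized."""
    def dphi(a, b):
        d = abs(a - b) % (2 * math.pi)
        return min(d, 2 * math.pi - d)

    # four-momenta for E-scheme recombination (massless inputs assumed)
    objs = [[pt * math.cos(phi), pt * math.sin(phi),
             pt * math.sinh(eta), pt * math.cosh(eta)]
            for pt, eta, phi in particles]

    def kinematics(p):
        px, py, pz, _ = p
        pt = math.hypot(px, py)
        eta = math.asinh(pz / pt) if pt > 0 else 0.0
        return pt, eta, math.atan2(py, px)

    jets = []
    while objs:
        kin = [kinematics(p) for p in objs]
        best, merge = None, None
        for i, (pti, etai, phii) in enumerate(kin):
            diB = pti**-2                       # beam distance d_iB = 1/pt^2
            if best is None or diB < best:
                best, merge = diB, (i, None)
            for j in range(i + 1, len(kin)):
                ptj, etaj, phij = kin[j]
                dr2 = (etai - etaj)**2 + dphi(phii, phij)**2
                dij = min(pti**-2, ptj**-2) * dr2 / R**2
                if dij < best:
                    best, merge = dij, (i, j)
        i, j = merge
        if j is None:                           # promote object i to a jet
            jets.append(objs.pop(i))
        else:                                   # recombine objects i and j
            objs[i] = [a + b for a, b in zip(objs[i], objs[j])]
            objs.pop(j)
    return [kinematics(p) for p in jets]

print(antikt_cluster([(50.0, 0.1, 0.0), (30.0, 0.15, 0.1), (20.0, -2.0, 3.0)]))
```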
Numerical results
Throughout this paper, all calculations are based on the following parameter setting. In collinear QCD factorization we use the one-loop strong coupling with n_f = 4 massless quark flavors and Λ_QCD = 200 MeV; the factorization and renormalization scales are both set equal to the produced photon transverse energy, µ_R = µ_F = E_T^γ. In the k_T-factorization calculations we use a two-loop expression for the strong coupling (as was originally done in the fit [33]), set µ_R = E_T^γ and define the factorization scale as µ_F² = ŝ + Q_T², with ŝ and Q_T being the subprocess invariant energy and the net transverse momentum of the initial off-shell gluon pair, respectively. Note that the definition of µ_F is dictated by the CCFM evolution algorithm [33].
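As a small bookkeeping sketch of this scale choice (a hypothetical helper, not project code; ŝ is approximated by x₁x₂s, neglecting the gluon virtualities):

```python
import math

def scales(photon_et, k1t, k2t, x1, x2, s):
    """mu_R = E_T^gamma; mu_F^2 = s_hat + Q_T^2, with Q_T the net transverse
    momentum of the two initial off-shell gluons (k1t, k2t are 2-vectors)."""
    mu_r = photon_et
    shat = x1 * x2 * s                     # simplification: virtualities dropped
    qt2 = (k1t[0] + k2t[0])**2 + (k1t[1] + k2t[1])**2
    mu_f = math.sqrt(shat + qt2)
    return mu_r, mu_f

print(scales(60.0, (15.0, 5.0), (-10.0, 8.0), 0.01, 0.02, 7000.0**2))
```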
The measurements of associated γ + jet production cross sections have been carried out by the CMS [1,2] and ATLAS [3][4][5][6] Collaborations at the LHC energies √s = 7, 8 and 13 TeV. However, the data [2,5,6] refer to the region of high E_T^γ (i.e., the region of relatively large x ∼ E_T^γ/√s), where the standard quark-induced subprocesses (3) - (6) dominate. We do not analyse events of this kind in the present study and concentrate only on the small and moderate E_T^γ data [1,3,4], where off-shell gluon-gluon fusion plays a role (see the discussion below). The experimental acceptance, anti-k_T algorithm radius R_jet and η − φ separation ∆R_{γ−jet} implemented in the experimental analyses [1,3,4] are collected in Table 1. The CMS Collaboration has reported [1] measurements of the triple-differential cross section dσ/dE_T^γ dη^γ dη^jet for various configurations of the photon and leading jet at √s = 7 TeV. In the ATLAS analysis [3], the differential cross section dσ/dE_T^γ has been measured for three different rapidity ranges of the leading jet: |y^jet| < 1.2, 1.2 < |y^jet| < 2.8 and 2.8 < |y^jet| < 4.4. For each rapidity configuration the same-sign (η^γ η^jet > 0) and opposite-sign (η^γ η^jet < 0) cases are studied separately. More recently, the ATLAS Collaboration has presented measurements [4] of γ + jet cross sections as a function of the photon transverse energy E_T^γ, leading jet transverse momentum p_T^jet and rapidity y^jet at the same energy √s. In addition, the cross sections as a function of the difference between the azimuthal angles of the photon and jet ∆φ_{γ−jet}, the invariant mass m_{γ−jet} and the scattering angle cos θ* = tanh[(y^γ − y^jet)/2] have been reported.
We confront our predictions with the CMS [1] and ATLAS [3,4] data in Figs. 1 - 4. (Table 1: the kinematical cuts and anti-k_T algorithm radius R_jet implemented in the experimental analyses [1,3,4] and in our calculations.) Our predictions (with account of both initial- and final-state parton showers) are plotted with scale uncertainties depicted as green and yellow shaded bands, respectively. To estimate these uncertainties we used the JH'2013 set 1(2)+ and JH'2013 set 1(2)− gluon distributions instead of the default JH'2013 set 1(2) density. These two sets represent a variation of the renormalization scale used in the off-shell production amplitude: JH'2013 set 1(2)+ stands for a variation of 2µ_R, while JH'2013 set 1(2)− reflects µ_R/2 (see [33]). To estimate the scale uncertainties in the quark-involving subprocesses (3) - (6), calculated in collinear QCD factorization, we have varied the scales µ_R and µ_F by a factor of 2 around their default values. Separately we show the contribution of the off-shell gluon-gluon fusion subprocess (1), calculated with k_T-factorization.
As one can see, we achieve good agreement of our predictions with the CMS [1] and ATLAS [3,4] data in the whole kinematical region within the experimental and theoretical uncertainties. The predictions from the JH'2013 set 2 gluon are somewhat lower than those from JH'2013 set 1, especially for the distribution in the scattering angle θ* (see Fig. 4). The reason for this lies in the additional limitation of the phase space in these measurements [4], namely, cos θ* < 0.83, m_{γ−jet} > 161 GeV and |y^γ + y^jet| < 2.37, which moves the probed kinematical region to somewhat larger x. Note that the measured distribution in cos θ* is sensitive to the γ + jet production dynamics and is well reproduced in our calculations with the JH'2013 set 1 gluon. One can also see that the off-shell gluon-gluon fusion subprocess (1), in which we are mainly interested, dominates at low and moderate transverse energy (E_T^γ ≤ 120 or 150 GeV) and practically does not contribute at larger values. So, the subleading quark-induced subprocesses (3) - (6) are important to achieve an adequate description of the data in the whole E_T^γ region. Similar conclusions were made earlier in [17] in the case of associated Z boson and heavy quark jet production at the LHC.
As noted above, the initial-state parton shower in cascade is based on the CCFM evolution equation, while the final-state parton shower is based on the DGLAP equations. To investigate the influence of final-state parton showers on the description of the LHC data, we repeated the calculations taking into account initial-state parton showers only. These results are presented in Figs. 1 - 4 by dashed histograms. We find that the final-state radiation effects are quite negligible in most of the distributions, excluding only the region of very small ∆φ_{γ−jet} (see Fig. 3).
As a last point of our study, we present the results of calculations where the simple model [20] has been applied in the jet selection procedure, similar to previous evaluations [19] (dash-dotted histograms in Figs. 1 - 4). As one can see, the achieved overall description of the considered experimental data is systematically worse, both in normalization and shape. Although the simple approach [20] is able to describe the measured E_T^γ distributions more or less adequately in some kinematical regions (as shown in Fig. 2), it fails for more exclusive observables, such as the ∆φ_{γ−jet} distribution (see Fig. 3). Thus, it again indicates the importance of taking into account the contributions from initial-state parton showers for the proper determination of the leading jet in the k_T-factorization approach.
Conclusion
We have considered the associated production of a prompt photon and hadronic jets under LHC conditions. The calculations were performed in a "combined" scheme employing both k_T-factorization and collinear factorization in QCD, with each of them used under the kinematic conditions of its best reliability. The dominant contribution is represented by the off-shell gluon-gluon fusion subprocess g*g* → γqq̄. Several subleading quark-induced subprocesses contributing at O(αα_s) and O(αα_s²) have been taken into account in the conventional collinear scheme. To correctly reconstruct the kinematics of the hadronic jets, the TMD parton shower generator cascade has been applied.
Using the TMD gluon densities derived from the CCFM evolution equation, we have achieved reasonably good agreement between our theoretical predictions and the CMS and ATLAS experimental data. We have demonstrated the importance of initial-state parton showers for jet determination in the k_T-factorization approach.

Figure 1: The differential cross sections of associated γ + jet production at √s = 7 TeV as a function of the prompt photon transverse energy E_T^γ in different regions of rapidities. The green and yellow shaded bands represent the results obtained with the JH'2013 set 1 and set 2 gluon densities (with scale uncertainties). Dashed histograms correspond to the predictions without final-state parton showers; dash-dotted histograms correspond to the results obtained with the simple approach [20]. The contribution from the off-shell gluon-gluon fusion subprocess (1) is shown separately. Everywhere the JH'2013 set 1 gluon density was used. The experimental data are from ATLAS [3].

Figure 2: The triple-differential cross sections of associated γ + jet production at √s = 7 TeV as a function of the photon transverse energy in different regions of rapidities. The notation of the histograms is the same as in Fig. 1. The experimental data are from CMS [1].

Figure 3: The differential cross sections of associated prompt photon and jet production at √s = 7 TeV as functions of the photon transverse energy E_T^γ, jet transverse momentum p_T^jet, jet rapidity y^jet and the azimuthal angle difference between the prompt photon and the leading jet ∆φ. The notation of the histograms is the same as in Fig. 1. The experimental data are from ATLAS [4].

Figure 4: The differential cross sections of associated prompt photon and jet production at √s = 7 TeV as a function of the scattering angle cos θ* and the invariant mass of the prompt photon and the leading jet. Additional cuts cos θ* < 0.83, m_{γ−jet} > 161 GeV and |y^γ + y^jet| < 2.37 are applied. The notation of the histograms is the same as in Fig. 1. The experimental data are from ATLAS [4].
"Physics"
] |
Platelet-rich plasma affects the proliferation of canine bone marrow-derived mesenchymal stromal cells in vitro
Background Reported efficacy of platelet-rich plasma (PRP) in regenerative medicine is contradictory. We validated the effects of PRP on the proliferation of canine bone marrow-derived multipotent mesenchymal stromal cells (K9BMMSCs) in vitro. PRP was extracted from the blood of six dogs with osteoarthritis. K9BMMSCs were established from bone marrow and characterized for CD90 and CD19 expression by immunocytochemistry. The effects of PRP concentrations on the viability of matching autologous K9BMMSCs were validated using the MTS assay. Results Positive CD90 and negative CD19 expression confirmed MSC origin. PRP at 40% volume/volume (v/v) concentration increased, while PRP at 80 and 100% v/v concentrations suppressed, the viability of the tested K9BMMSCs. Conclusion PRP concentration plays an important role in K9BMMSC viability, which could affect tissue repair in vivo. Electronic supplementary material The online version of this article (10.1186/s12917-019-2010-x) contains supplementary material, which is available to authorized users.
Background
Platelet-rich plasma (PRP) is an enriched plasma containing a variety of growth factors, including platelet-derived growth factor (PDGF), vascular endothelial growth factor (VEGF), transforming growth factor-β (TGF-β), fibroblast growth factor (FGF), and insulin-like growth factors I and II (IGF-I, IGF-II) [1,2]. These growth factors are potent chemoattractants and mitogens, which help attract and activate surrounding cells at sites of injury. Importantly, at sites of injury, PRP entraps mesenchymal cells and supports the proliferation and differentiation of surrounding endothelial and other stromal cells, resulting in accelerated wound healing [1,[3][4][5][6][7]. The proliferation and differentiation potentials of multipotent mesenchymal stromal cells (MSCs) can be applied to the treatment of degenerative diseases, including osteoarthritis (OA) [8].
OA is a painful and debilitating orthopedic condition affecting both humans [9] and companion animals [10][11][12]. This chronic disease is most commonly treated with anti-inflammatory drugs, pain relievers and supplements [10]. In recent years, intraarticular injections of MSCs [13], PRP [14], or the combination of MSCs and PRP [15] have been investigated for the treatment of OA and other bone injuries. Positive results demonstrating the safety and efficacy of PRP application in general surgery, oral and maxillofacial surgery, plastic surgery and soft tissue healing in tendons, ligaments and muscles have been reported [2,3,16]. On the other hand, no beneficial effects of PRP treatment were reported in the healing of human Achilles tendinopathy [17], human Achilles tendons [18], or for canine bone formation [19]. Despite the promising results mentioned, a consensus on the actual benefits of PRP has not yet been established. Such variation in outcomes related to PRP treatment could be attributed to aspects of study design, such as sample sizes and control selection, in addition to the type of disease under investigation [1]. Another contributing factor could be the concentration and volume of PRP used during these treatments.
In this study, we evaluated the effects of PRP concentration on the in vitro cell viability of autologous canine bone marrow-derived multipotent mesenchymal stromal cells (K9BMMSCs) harvested from client-owned dogs with a history of OA. These findings will help streamline the methodology for using PRP as a standard-of-care treatment for injuries.
Isolation and characterization of K9BMMSC cells
We successfully isolated K9BMMSC cells from six dogs diagnosed with OA (Table 1). K9BMMSC cells were isolated from the obtained bone marrow samples as shown in Fig. 1a. The mononuclear cell layer separated after centrifugation using LSM contained bone marrow-derived mononuclear cells (lymphocytes, monocytes, stem cells, progenitor cells, endothelial cells, and mesenchymal stromal cells), as shown in Fig. 1b. K9BMMSCs were cultured in complete DMEM/F12 media as shown in Fig. 1c and further characterized for expression of cell surface proteins using ICC staining. K9BMMSCs were confirmed to be positive for expression of CD90 (Fig. 2, left panel), a cell surface protein expressed in MSCs [20]. In addition, none of the K9BMMSCs expressed the CD19 protein (Fig. 2, right panel), a transmembrane protein expressed only in B-lineage cells, which confirmed the MSC origin of the isolated cells [20].
Isolation and characterization of PRP
PRP was separated from blood collected from the same dogs as the K9BMMSC cells using the Dr.PRP kit, as shown in Table 1. Approximately 2-5 ml of PRP was separated from the central layer after centrifugation of blood (Fig. 1d). Platelets were counted using a hemocytometer, and the number of isolated platelets from each dog is reported in Table 1, except for the PRP isolated from dog #4.
In addition, K9BMMSCs cultured at 0, 20, 40 and 60% PRP (v/v) concentrations had an elongated, spindle-shaped morphology, as shown in Fig. 4 and Additional file 1, while 80 and 100% PRP concentrations resulted in a spherical morphology of the K9BMMSC cells, as demonstrated in Fig. 4 and Additional file 1.
Discussion
PRP is an attractive treatment option for patients with joint injuries, including tendon and ligament injuries, especially in the realm of sports medicine and orthopedic surgery [21]. Numerous clinical trials have confirmed the benefits of PRP and its products for the treatment of orthopedic diseases in companion animals as well as in humans [1,22,23]. PRP treatment alone [24], as well as in combination with MSCs, has been demonstrated to promote bone regeneration [25]. The use of PRP and autologous cells is ideal for in vivo applications due to fewer immune compatibility, safety, and ethical concerns. PRP can be used as an alternative source of serum for culturing MSCs [26,27]. A 5-10% platelet lysate in media was demonstrated to promote proliferation, expansion, colony formation, and differentiation of human MSCs compared to fetal calf serum [26,27]. Recent studies have focused on optimizing the concentrations of PRP used for treatments due to variable effects on the viability of co-cultured cells in vitro [28][29][30] and in vivo [31]. PRP at 1-5% (v/v) concentration induced rapid proliferation of canine alveolar bone cells after 7 days in culture, while PRP at 30-100% (v/v) concentrations suppressed the proliferation of alveolar bone cells [28]. Another study demonstrated that 2.5-20% PRP (v/v) concentrations stimulated the proliferation and migration of primary rat Schwann cells on days 3, 5 and 7 in vitro, while a 40% PRP (v/v) concentration suppressed their proliferation and migration [29]. PRP obtained from normal healthy donors at lower concentrations (v/v) stimulated the proliferation of primary oral fibroblasts and osteoblasts, while higher concentrations (v/v) of PRP were toxic to the tested cells in vitro [30]. In addition to in vitro studies, in vivo studies have also confirmed the importance of using optimal PRP concentrations. PRP at low concentrations (2 × 10⁶/mm³) promoted intestinal anastomotic healing in rats in vivo, while high PRP concentrations (5 × 10⁶/mm³) impaired healing [31]. Similarly, positive effects on distal femur regeneration in New Zealand white rabbits in vivo were observed at optimal PRP concentrations (0.5-1.7 × 10⁶/mm³), whereas low concentrations (0.16-0.37 × 10⁶/mm³) were not effective and high PRP concentrations (1.85-3.2 × 10⁶/mm³) inhibited bone regeneration [32]. Comparable to the previously published results in other cell lines and animal models, we observed that PRP concentrations of up to 50% (v/v) increased the viability of K9BMMSC cells, with a PRP concentration of 40% (v/v) being optimal. Our data demonstrated that the viability of K9BMMSCs cultured at a 60% (v/v) PRP concentration was not different from that of cells cultured in complete DMEM/F12 media only. However, PRP at 80 and 100% (v/v) concentrations was toxic to four of the six tested cell lines. The exact mechanism of the effect of high PRP concentrations on cell viability is still unknown but is likely due to high concentrations of growth factors. It has previously been reported that PDGF and TGF-β1 growth factor concentrations are increased in concentrated PRP [31].
While our study and numerous others demonstrate that concentrated PRP is not beneficial for cell culture, the described optimal concentrations vary between studies. These differences in the volume ratios, quality and yield of PRP between studies can be attributed to the preparation of PRP using a variety of preparation kits and procedures [33]. Furthermore, since PRP is isolated from different individuals, concentrations can be affected by differences in the health status and condition of the animal or person [34,35]. Thus, further validation of the optimal PRP concentration is required to determine the optimal number of enriched platelets, the volume applied and the concentration of growth factors for clinical in vivo applications [36]. In conclusion, our observations indicate that PRP at a 40% (v/v) concentration was beneficial for the cell viability of K9BMMSCs, but 80-100% (v/v) concentrations had the opposite effect.
Conclusion
Our results demonstrated that the concentration and volume of PRP affect the viability of K9BMMSCs in vitro, which might have an effect on the treatment outcomes of OA in dogs in vivo. We confirmed that PRP at 40-50% (v/v) concentrations increased the cell viability of K9BMMSCs, while high concentrations of 80-100% (v/v) inhibited cell viability in four out of the six tested K9BMMSC cell lines.
Animals
Six client-owned dogs with naturally occurring OA were enrolled in this study according to the protocol approved by the Institutional Animal Care and Use Committee at the University of Tennessee (UT-IACUC). The owners signed informed consent forms agreeing to enroll their dogs. Inclusion criteria at admission included clinical evidence of OA of the elbow or hip with a unilateral limb lameness greater than 5% between limbs on screening by force plate (FP) evaluation. Exclusion criteria included: presence of other major illness, orthopedic problems unrelated to our study, orthopedic surgery within the last 6 months, or skin infection of the injection site for more than one week. Details regarding each dog's breed, the passage number of cells used in the MTS assay and the number of isolated platelets are presented in Table 1.
Bone marrow extraction
Approximately 4 ml of bone marrow was harvested from the proximal humerus of the lame limb of each dog using a bone marrow aspirate needle (Fig. 1a) while the dog was under sedation. The procedures were performed by a board-certified veterinary specialist according to the approved UT-IACUC protocol. Bone marrow was mixed with 0.1% citrate-dextrose solution (Santa Cruz Biotechnology, Dallas, TX) to prevent coagulation. The obtained bone marrow was immediately processed for isolation of K9BMMSCs.

(Figure caption: Values represented here are mean ± S.E. of four replicates of PRP-treated cells normalized to cells cultured in complete DMEM/F12 media only. Paired Student's t-test was performed to compare differences in cell viability of PRP-treated K9BMMSCs to untreated cells. Significance was determined at * p < 0.05, ** p < 0.01, and *** p < 0.001. K9BMMSC: canine bone marrow-derived multipotent mesenchymal stromal cells; PRP: platelet-rich plasma.)
Isolation and expansion of K9BMMSCs
The harvested bone marrow mixture was diluted in 1x PBS and strained through a 70 μm nylon cell strainer (BD Falcon, Franklin Lakes, NJ) to remove blood clots. The mixture of bone marrow with PBS was slowly pipetted over 15 ml of Lymphocyte Separation Medium (LSM) (MP Biomedicals, LLC, Santa Ana, CA) and centrifuged at 1,000 rpm. The middle layer containing K9BMMSCs was gently isolated and washed with PBS (Fig. 1b). RBC lysis buffer was added to remove any residual red blood cells, followed by additional washes with PBS. The isolated K9BMMSCs were seeded in complete DMEM/F12 media (GE Healthcare Life Sciences, UK) supplemented with 10% fetal bovine serum, 100 IU/ml penicillin, and 100 μg/ml streptomycin and cultured in a 5% CO₂ incubator at 37°C. Cells were passaged when they reached 70-90% confluence (Fig. 1c). Isolated K9BMMSC cells at passages 4-14 were used in our experiments.
Extraction of PRP
PRP was isolated from whole blood using the Dr.PRP kit (Dr.PRP USA, Missouri City, TX) following the manufacturer's instructions. Briefly, 18 ml of blood was drawn from the jugular vein with an anticoagulant syringe and placed in a blue PRP container. After centrifugation, the separated PRP layer in the middle of the container (Fig. 1d) was extracted using a syringe. Isolated platelets were counted in a hemocytometer using the BMP solution following the manufacturer's instructions (LeukoChek, Gardner, MA). The isolated PRP was stored at −80°C until further use.

Immunocytochemistry (ICC) staining

K9BMMSC cells were cultured at a density of 5 × 10⁵/well on a 4-chamber slide and allowed to attach for 24 h. The ICC staining was performed according to a previously published protocol [37]. Attached cells were washed in PBS and fixed in 4% paraformaldehyde solution for 10 min. Cells were rinsed twice in PBS and permeabilized using 0.1% Triton X-100 in PBS for 5 min. Cells were blocked in normal goat serum (Biogenex, Fremont, CA) and incubated with the anti-rat CD90 (AbD Serotec, UK) and anti-mouse CD19 (Millipore, Billerica, MA) primary antibodies overnight at 4°C, followed by incubation with secondary antibodies and then streptavidin conjugated with horseradish peroxidase (HRP, Biogenex, Fremont, CA), and visualized with the substrate 3,3′-diaminobenzidine (DAB, Vector Laboratories, Burlingame, CA). Cells were counterstained with diluted hematoxylin and, following gradual dehydration, finally cover-slipped with xylene-based mounting medium. Stained slides were evaluated, and images were captured by an Olympus DP73 camera (Hunt Optics and Imaging, Pittsburgh, PA) attached to a Leitz DMRB microscope (Leica) using cellSens Standard software (Olympus, Center Valley, PA).
MTS assay
K9BMMSC cells were plated at a density of 5,000 cells/100 μL/well on a 96-well tissue culture plate in complete DMEM/F12 media and allowed to attach for 24 h. Cells were then treated with PRP at 0, 20, 40, 50, 60, 80, and 100% (v/v) concentrations in complete DMEM/F12 media and incubated for an additional 72 h. After treatment, the cell viability of K9BMMSCs was measured using the MTS assay (CellTiter 96® AQueous One Solution Cell Proliferation Assay, Promega Corporation, Fitchburg, WI, USA) following the manufacturer's instructions. The obtained absorbance at 490 nm (FLx800 plate reader, Bio-Tek Instruments, Winooski, VT, USA) from PRP-treated cells was normalized to untreated cells and reported as mean ± SEM.
Image acquisition of cell morphology of K9BMMSCs
Images of K9BMMSCs in culture were captured by a MicroPublisher 3.3 camera (QImaging, Surrey, BC, Canada) attached to a Vista Vision microscope (VWR, Artisan Technology Group, Champaign, IL) using the Q-Capture Pro7 software (QImaging).
Statistical analysis
Statistical analyses were conducted using Student's paired two-tailed t-test to establish differences between the PRP-treated and control groups. Results were considered statistically significant at * p < 0.05, ** p < 0.01, and *** p < 0.001.
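For illustration, such a paired comparison of normalized viabilities takes only a few lines in Python; the numbers below are made-up placeholders, not the study's measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical normalized viability values (% of untreated control) for the
# six dogs; real values come from the MTS absorbance readings at 490 nm.
control = np.array([100.0, 100.0, 100.0, 100.0, 100.0, 100.0])
prp_40 = np.array([128.0, 141.0, 119.0, 133.0, 125.0, 137.0])

t_stat, p_value = stats.ttest_rel(prp_40, control)  # paired two-tailed t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("40% (v/v) PRP significantly changes K9BMMSC viability")
```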
Additional file
Additional file 1: Figure S1.
"Medicine",
"Biology"
] |
Toward a service-based workflow for automated information extraction from herbarium specimens
Abstract Over the past years, herbarium collections worldwide have started to digitize millions of specimens on an industrial scale. Although the imaging costs are steadily falling, capturing the accompanying label information is still predominantly done manually and is developing into the principal cost factor. In order to streamline the process of capturing herbarium specimen metadata, we specified a formal, extensible workflow integrating a wide range of automated specimen image analysis services. We implemented the workflow on the basis of OpenRefine together with a plugin for handling service calls and responses. The evolving system presently covers the generation of optical character recognition (OCR) output from specimen images, the identification of regions of interest in images and the extraction of meaningful information items from OCR. These implementations were developed as part of the Deutsche Forschungsgemeinschaft-funded project 'A Standardised and Optimised Process for Data Acquisition from Digital Images of Herbarium Specimens' (StanDAP-Herb).
Herbarium collections
A herbarium is a collection of preserved specimens of plants, fungi and algae. Herbarium collections contain >350 million specimens worldwide (1). These specimens and their related data provide a huge amount of valuable and useful information: descriptions of taxa in floras and monographs are largely based on herbarium specimens. They document the distribution of taxa and often come with additional information, e.g. about the habitat or uses of the plant at the place of collection. Specimen vouchers make it possible to review identifications (2), thus allowing the assertion of their taxonomic classification, as well as any conclusion based on it, to be falsified. They also serve as raw materials for several kinds of analyses, with molecular techniques like generating deoxyribonucleic acid (DNA) sequences becoming increasingly important. They serve as reference material for a number of fields of biological research, ranging from systematics to ecology and biodiversity. In particular, through their long history, herbarium collections document changes of plant biodiversity over time and in space. For example, with the help of specimen information, it became possible to provide substantial proof for the assumptions that and how temperature changes cause shifts in geographic distributions and flowering times (3). Herbarium specimens were also used to show how a change in air quality affects plants and to trace movements of invasive species (3).
Information on plant specimens
The method of preserving vascular plants has hardly changed since its introduction 500 years ago (1): collected plants are pressed and dried and then fixed on paper sheets. The specimen is labeled with information like the scientific name of the plant, place and date of collection and the name of the collector and/or the name of the project or collecting event, a collection number, name of the determiner and notes on the habitat and habit of the plant. In some cases, the label is supplemented by further labels, for example correcting or amending the original scientific name (annotations) (Figures 1 and 2).
The scientific name of a species or infraspecific taxon refers to a nomenclatural type, i.e. normally a single specimen designated when the taxon was first described and named. Type specimens are often marked by a red rectangle or the word 'type' stamped on the sheet.
In preparation of the photographic recording of a specimen, a scale, and often a color chart, is mounted on the paper sheet in order to put the size and color spectrum of the specimen image in relation to the original specimen (4) ( Figure 1).
Herbarium data for research
In the following, the information located on the herbarium vouchers, in particular the information on the labels, is referred to as 'data' (5,6). The term metadata is used in the sense of Meon et al. (7) to designate transformed, enhanced and structured data. Databased specimen metadata allow wide-ranging searches and analyses of biodiversity information based on herbarium sheets. Search results serve for instance as a basis for the generation of time series and species distribution maps showing changes of species composition over time [e.g. (8,9)].
Especially for cross-disciplinary research questions such as those related to climate change, all available metadata should be utilized. International information networks like the Global Biodiversity Information Facility (http://www.gbif.org) and the Biological Collection Information Service (10) (http://www.biocase.org) publish freely available biodiversity data according to common standards such as Access to Biological Collection Data (ABCD) (11) and Darwin Core (DwC) (12), providing joint access to collection databases worldwide.
Mass digitization and data capture
Over the past years, several natural history institutions started industrial scale digitization processes for their specimen holdings.
A similar procedure was set up (13), with ∼7 million objects (also from zoology) being digitized. In 2010, the Natural History Museum 'Naturalis' in the Netherlands started a program for mass digitization (14) and digitized 4 million herbarium specimens in 1.5 years.
As a result of this production-line digitization, a large throughput is achieved in the photographic recording of specimens. However, the manual capture of the data remains a bottleneck. At the same time, data capture is a prerequisite for accessing and cross-linking the collections with their related information.
Automatic data capture
There are already a number of individual tools and services for automated data extraction from images available, but the functionality covered is incomplete and/or cannot be integrated via standard interfaces. We investigated such systems, but they did not fit in with our basic idea of an extensible open-source web-service-based workflow platform. Symbiota (15) is a complete herbarium image management software with additional functions to automatically extract data, rather than an extensible open-source Application Programming Interface (API) for automatic data extraction. The Salix parser (16) is a stand-alone tool used to extract and classify information from text sequences and could be added as a service to our workflow system, but it does not cover other functionalities needed for a complete workflow. The Apiary project, which focused on parsing specimen label data, is no longer available. Up to now, there has been no integrated and extensible workflow platform available which combines services and tools provided by the biodiversity informatics community (see the Workflows and software architecture section below). With the aim of speeding up the data capture process, the StanDAP-Herb project developed a standardized workflow comprising web service modules, which carry out a stepwise (partly) automated capture of data contained in herbarium images, as described in the following.
Workflow
In this section, the concepts of services and workflows are outlined on which the specifications and development work documented in Workflows and software architecture and Outlook sections are based.
The StanDAP-Herb architecture is based on standardized interfaces and established IT standards as well as on a platform-independent technical specification, which can easily be adapted to specific platforms. This is to facilitate a long software life cycle as well as an easy integration of and into systems operated by third parties and to allow globally interacting services in workflows defined across organizational boundaries. Figure 4 outlines the StanDAP-Herb main workflow choreography, which is described as a set of interacting choreographies that constitutes the 'mainstream' processing of a digitized specimen. We choose a choreography diagram (17) because it is the adequate means for describing the global communication across enterprise boundaries. It describes the agreements between the main interacting participants with respect to documents, message types and sequences of message flows. The internal view of the processes is then further outlined by means of Business Process Model and Notation (BPMN) diagrams (17), which have been developed in the project and are partially outlined in Workflows and software architecture section.
A choreography task represents the interaction between two participants. The participant initiating the communication (e.g. the 'Image Supplier' in the uppermost choreography task shown in Figure 4) is placed above the entire choreography task (here: 'Initialize Repository'). The receiving participant (here: the 'Repository Service') is shown below the choreography task. The Image Supplier is assumed to be the owner of a graphics file representing a digitized herbarium specimen (BPMN artifact 'Specimen Image File'). The Repository Service is an instance that creates a global reference by means of which the file can be retrieved for further processing. The range of mechanisms for generating this reference may vary depending on the organization providing this service; for instance, the repository reference could lead to a cloud service where the specimen description files (SDFs) are stored.
By means of the repository reference, a participant named 'Image Operator' can retrieve the Specimen Image File and conduct some analytic work on the image. This work is executed by services that directly work on the image, e.g. extraction of the scale, identification of dots per inch (DPI) or location of text areas (cf. the imagebased services used in the 'Pre-OCR Workflow' defined in Workflows and software architecture section). The Image Operator adds the results computed by these services to the SDF, which is augmented during the process by gaining the results of each stage. In a further step, the Image Operator starts the processes for character recognition; in addition, to the optical character recognition (OCR) services working on machine printed text, services for handwriting recognition would be needed here.
In the next choreography task, the focus is on further elaboration of the texts which have been found on the herbarium specimen by the text recognition services. Besides information extraction in order to find categories such as the taxon, the collector or the owner of a herbarium sheet, a further task is to complete incomplete texts by means of tools for auto-completion or manual completion. The 'Text Operator', who conducts and coordinates all text processing activities, should thus be assisted by means of a proper user interface.
Data quality assurance is the topic of the next task ('Assure Professional Quality'); two basic mechanisms are distinguished: (i) Machine-based approaches: platforms such as 'Open Refine' (18) can be used to harmonize many faceted information collected in the SDFs. (ii) Human-based approaches: in certain cases, expert knowledge may be needed in order to resolve conflicting statements or missing information; experts such as taxonomists are to be contacted in order to review and pass an expert opinion back to the operator. The Quality Assurance (QA) operator will then incorporate corrections and comments given by the specialists into the SDF.
Of course, both approaches may be combined interactively and iteratively. An example for the application in a workflow environment was set by the European Union (EU)-funded Biodiversity Virtual e-laboratory (BioVeL) project (19).
When the object and category files are complete and quality assured, they will be handed over to the 'Collection Manager', for whom the data have been investigated. The Collection Manager (or the 'Curator', respectively) can then store the metadata in a specimen database for publication. For this purpose, the metadata is to be converted into a standard format for biological collection data (ABCD or DwC). Before incorporating and releasing the metadata record, it is matched against existing data (in the local specimen database or in the network) and in case duplicates are found, the new metadata and the found duplicate have to be merged. For example, if a duplicate in the network contains additional information, this can be added to the metadata before generating the new database entry.
As soon as the metadata set is stored in the specimen database, it can be annotated, i.e. further data or changes to the metadata may be made. This can be done by the Collection Manager or Curator but also by other actors using the public data access. For the latter, an external 'Annotation System' has to be used [e.g. AnnoSys, (20,21)], which stores annotations made from all online portals where a metadata record set from a specimen database has been published. In our workflow, we are also assuming that Collection Managers can be informed as soon as annotations have been made in an Annotation System and that they verify these annotations and eventually incorporate them into their specimen database.
Since data curation of imaged collections poses a challenge to the personnel resources of herbaria, an alternative procedure (in agreement with the Image Supplier) could be to store the gained metadata directly in an annotation repository like AnnoSys, i.e. to treat them as annotations to the original metadata. This permits asynchronous processing of the specimen data, i.e. they may become accessible immediately. However, full use of temporarily or permanently stored extraction results (as annotations in data-driven research workflows) requires the integration of data access services for these annotations into the infrastructures of the corresponding data portals.
Services
We developed several web services to enable the implementation of workflows for processing digital herbarium specimens. Some services were developed from scratch; others are using third-party libraries and wrap already existing functionalities to make them available for our workflows.
In Table 1, all available services are listed. Image-based services are working on specimen image data. They have two goals. The first one is to optimize the results of data extraction by locating objects on herbarium images which provide information about the specimens. The second one is to pre-sort herbarium images by criteria like 'possibility of automatic processing' or 'handwritten label'.
Text-based services are working on text parts extracted by image-based services. They can recognize a set of entities like plant name, collector name, geographical coordinates and/or mentioned locations. These services use either dictionaries or regular expressions (22) for entity recognition.
To build a workflow, it is essential to know what a service requires as input and what can be expected as output. As long as the required input and given output type of a service stay the same, the exact procedure to get from the input to the output can be changed without influencing other parts of already configured workflows. Table 1 shows that the output of some services can be used as input for others and that it is therefore possible to build chains of services for processing, as sketched below.
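A minimal sketch of such type-driven chaining (service names, types and return values are illustrative, not the project's actual interface definitions):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Service:
    name: str
    consumes: str          # e.g. "image", "coordinates", "text"
    produces: str
    run: Callable[[object], object]

def build_chain(services, start_type, goal_type):
    """Greedy chain builder: pick the first service whose declared input
    type matches the current intermediate type."""
    chain, current = [], start_type
    while current != goal_type:
        nxt = next((s for s in services if s.consumes == current), None)
        if nxt is None:
            raise ValueError(f"no service consumes '{current}'")
        chain.append(nxt)
        current = nxt.produces
    return chain

services = [
    Service("TextRegion", "image", "coordinates", lambda img: [(10, 10, 200, 80)]),
    Service("OCR", "coordinates", "text", lambda coords: "Herbarium ..."),
]
for step in build_chain(services, "image", "text"):
    print("->", step.name)
```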
Example workflows
To show the potential of the developed web services, three example workflows were developed. In the following, these will be briefly described. All of them are based on three actors: the image supplier, the coordinator and the service provider. An example of an image supplier would be an institute that wants to analyze one of its digital herbarium sheets. The image supplier interacts with the coordinator, whose function is executing an analysis pipeline by requesting different services provided by the service provider, aggregating the results and returning them to the image supplier. An extension for the OpenRefine platform, supporting the execution of the pipelines described here and fulfilling the role of the coordinator, was developed and can be downloaded at http://api.bgbm.org/standap/download/openrefine-extension.
Example 1: Pre-OCR Workflow
The OCR is one of the most important steps to extract data from digital specimen images. It is therefore crucial to optimize it to get the best possible results. The aim of the Pre-OCR Workflow is to compute the DPI of a given digital specimen image and find text areas within that image. These data can then be used to improve OCR results by letting the OCR software process just the identified text regions and therefore reduce noise, such as parts of the plant specimens that are interpreted as letters. DPI recognition is important because most OCR engines are specialized in processing images of a specific DPI. The restriction of the OCR process to areas of actual text also reduces the execution time.
Three services are used to create the Pre-OCR Workflow: Scale Matching Service. It uses template matching to compute the coordinates of the scale (see Figure 1) of a specimen image.
DPI Service.
It is able to compute the DPI of a specimen image from the scale coordinates computed by the Scale Matching Service and the resolution and size of the SRI used in the Scale Matching Service's template-matching process. Text Region Service. It can find the coordinates of text areas in specimen images.
In the following, it will be assumed that the workflow is started by an Image Supplier, i.e. the user who publishes the specimen images to be processed by the Pre-OCR Workflow and provides an SRI to the coordinator. An SRI is an image of the scale mounted on the analyzed specimen. For unique specimen image Uniform Resource Identifiers (URIs), we make use of a consistent system of Globally Unique Identifiers (GUIDs) agreed by the Consortium of European Taxonomic Facilities (25,26). The user provides the GUID(s) to the coordinator, which then first creates an SDF for each GUID, in which all information gained throughout the execution of the workflow will be saved. Next, the Scale Matching Service is requested with each of the given GUIDs and the configured SRI as parameters. The resulting coordinates, in combination with the known resolution of the SRI and the physical size of the scale, are then used to request the DPI Service. With the returned DPI numbers, it is possible to invoke the Text Region Service, which responds with the coordinates of text areas found in the specimen images referenced by the given GUIDs. All results of the services are available in the SDFs, which are provided to the user by the coordinator at the end of execution. These files can subsequently be exported and saved for later usage, e.g. to start the OCR Workflow described in the next section. The BPMN of the Pre-OCR Workflow is shown in Figure 5.
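The coordinator role for this pipeline could look roughly as follows; all endpoint paths and JSON field names below are assumptions for illustration, not the project's documented API:

```python
import requests

BASE = "http://api.example.org/standap"   # hypothetical base URL

def pre_ocr_workflow(guid, sri_url):
    """Sketch of the coordinator: Scale Matching -> DPI -> Text Region.
    Accumulates all intermediate results in the SDF for the given GUID."""
    sdf = {"guid": guid}                                  # specimen description file

    scale = requests.post(f"{BASE}/scale-matching",
                          json={"guid": guid, "sri": sri_url}).json()
    sdf["scale_coordinates"] = scale["coordinates"]

    dpi = requests.post(f"{BASE}/dpi",
                        json={"guid": guid,
                              "scale_coordinates": scale["coordinates"]}).json()
    sdf["dpi"] = dpi["dpi"]

    regions = requests.post(f"{BASE}/text-regions",
                            json={"guid": guid, "dpi": dpi["dpi"]}).json()
    sdf["text_regions"] = regions["coordinates"]
    return sdf
```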
Example 2: OCR workflow
The OCR workflow is used to find and read text located on a digital specimen image, which will then be analyzed to extract metadata. It is started by the user who generated the Specimen Description File (SDF) using the pre-OCR workflow as detailed above. The user can feed in several SDFs at once. The coordinator parses the description file and requests the OCR service with the GUID and the coordinates of text areas previously found by the Text Region Service. The returned text parts are added to the SDF which is then returned to the user. The BPMN of the OCR Workflow is shown in Figure 6. For this approach, it is necessary that the requested OCR service supports the processing of image parts by coordinates.
Example 3: Extractor workflow
The extractor workflow combines all text-based extractor services and thus enables the extraction of a maximum amount of metadata from a given text. The following services are available for the text provided: Scientific Name Extractor. It finds and parses scientific names. Botanist Name Extractor. It finds and parses botanist names. Date Extractor. It finds and parses collection, accession and determination dates. GeoCoord Extractor. It finds and parses latitude and longitude pairs. Location Extractor. It finds and parses location information and assigns a country.
The workflow is once more started by the user providing SDF(s). All Extractor Services are then requested with the text parts parsed from the SDF. The order of requests sent by the coordinator is not important. Figure 7 gives an example workflow.
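As a toy illustration of the regular-expression side of such extractors (the project's services additionally use dictionaries, e.g. of botanist names, which are omitted here; the patterns below are simplified assumptions):

```python
import re

# Simplified date and DMS geo-coordinate patterns for label text
DATE = re.compile(r"\b(\d{1,2})[./-](\d{1,2})[./-](\d{2,4})\b")
GEOCOORD = re.compile(
    r"(\d{1,2})[°\s](\d{1,2})['\s](?:(\d{1,2})[\"\s])?\s*([NS])[,;\s]+"
    r"(\d{1,3})[°\s](\d{1,2})['\s](?:(\d{1,2})[\"\s])?\s*([EW])")

label = "Leg. A. Braun, 12.06.1887, 52°27'14\" N, 13°18'12\" E"
print(DATE.findall(label))               # [('12', '06', '1887')]
print(GEOCOORD.search(label).groups())   # DMS components plus hemispheres
```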
Outlook
A key aim of the StanDAP-Herb project was the investigation of possibilities for combining a service-oriented architecture (SOA) with the advantages of a flexible workflow management system. As outlined above, this flexibility is needed for handling changing requirements such as (i) managing entry points at which external organizations can use (parts of) the StanDAP-Herb workflow, (ii) enabling external organizations to integrate their services into the StanDAP-Herb workflow chain and (iii) providing a high degree of automation whilst easily permitting manual interaction at specific stages.
Many of the workflow systems that have been investigated like Taverna (27), Kepler (28) and Argo (29) do not fulfill the StanDAP-Herb requirements in one or more respects. Some lack the possibility to describe complex workflows in a standardized notation such as BPMN; others do not offer a comprehensive run-time support for integration of services in the context of an SOA.
As a result, we identified the Activiti workbench (http://www.activiti.org/) as the most promising system. Activiti can be used as a library or a service; it provides a number of tools for design and deployment of complex workflows, e.g. graphical representations of workflows, support of BPMN 2.0 constructs, integration of user management and specification of forms in the workflow definition. The Activiti service interface can be used to deploy a process.
However, as with other tools, it is necessary to write Java classes for that purpose. At present, a solution that can provide full integration of SOA by configuration is to our knowledge not available. Consequently, one of the main StanDAP-Herb ideas can only be realized by development of more sophisticated background technology. Future research and development should focus on this item.
To make the created web services available to a larger user base and for testing purposes, the OpenRefine extension mentioned above was developed, offering easy access to the basic usage of the described web services as well as access to Herbadrop (30) OCR results. Further development of the extension is necessary to increase the usability of the described work.
The requirement of easy integration of third-party services into a StanDAP-Herb installation mandates direct usage of available registries for biodiversity services such as the 'Biodiversity Catalogue' (http://www.biodiversitycatalogue.org/). In principle, such web-based services offer Representational State Transfer (REST) or, originally, Simple Object Access Protocol (SOAP) interfaces (the trend is toward REST interfaces, which are easier to use). For future applications, we recommend the usage of message-oriented middleware (MOM) technology (31) that can provide 'publish/subscribe' schemata. Moreover, a MOM broker such as RabbitMQ (https://www.rabbitmq.com/) or QPID (https://qpid.apache.org/) can provide queue mechanisms for delivery and processing of messages and thus directly support the connection of external organizations to a StanDAP-Herb workflow installation.
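A minimal publish/subscribe sketch with RabbitMQ and the Python pika client (the exchange name, the GUID and the idea of broadcasting newly digitized specimens to workflow workers are illustrative assumptions):

```python
import pika

# Connect to a local RabbitMQ broker and declare a fanout exchange
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="specimens", exchange_type="fanout")

# Producer side: announce a newly digitized specimen (hypothetical GUID)
channel.basic_publish(exchange="specimens", routing_key="",
                      body="urn:example:specimen:0001")

# Consumer side: each worker binds its own exclusive queue to the exchange
queue = channel.queue_declare(queue="", exclusive=True).method.queue
channel.queue_bind(exchange="specimens", queue=queue)

def on_message(ch, method, properties, body):
    print("processing specimen", body.decode())

channel.basic_consume(queue=queue, on_message_callback=on_message,
                      auto_ack=True)
channel.start_consuming()   # blocks; run producer and consumer separately
```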
"Computer Science"
] |
A comparison of the performance and risk diversification benefits of real estate investment trusts in Malaysia and Singapore
This study analyses the investment performance and effectiveness of risk diversification of M-REITs and S-REITs by comparing their respective Sharpe ratios, Treynor ratios and Jensen's alphas, together with the diversification measures (unsystematic risk divided by total risk, and one minus R-squared) calculated for each REIT. The study period extends from 2007 to 2016 for M-REITs and from 2002 to 2016 for S-REITs. Results show that M-REITs perform better than S-REITs in terms of the Sharpe ratio, Treynor ratio and Jensen's alpha. The total risk of S-REITs is higher than that of M-REITs. The beta values for both M-REITs and S-REITs are less than one, implying that both categories of REITs are less risky than the market index. M-REITs have lower R-squared values than S-REITs, which suggests that M-REITs are more poorly diversified than S-REITs and therefore have more diversification opportunities. The diversification measures computed for M-REITs are higher than those for S-REITs, which would imply that M-REITs can achieve better rates of return if they diversify their risk (higher risk diversification benefits). Top and poor performers among M-REITs and S-REITs are also identified in this study. The findings aim to assist investors in making better investment decisions when considering investing in M-REITs and S-REITs.
Introduction
In the 1960s, property was out of reach for most people because real estate is a highly priced asset class. Nowadays, Real Estate Investment Trusts (REITs) have become a mainstream way of investing in real estate in many countries. Malaysian Real Estate Investment Trusts (M-REITs) and Singapore Real Estate Investment Trusts (S-REITs) have been developed since the turn of the century. Significant studies have been conducted in the past to provide a thorough understanding of the important contribution of REITs to the broader equity market, in order to provide valuable information and guidelines to investors and other stakeholders. The purpose of this research is to investigate and compare the investment performance and risk diversification features of M-REITs and S-REITs using the classical but well-known standard performance measurement methods, namely the Sharpe ratio, Treynor ratio and Jensen's alpha. The hypotheses examined in this study concern whether M-REITs and S-REITs differ in their risk-adjusted performance and in their risk diversification benefits. The research provides investors with insights into performance and risk diversification benefits, and intends to build on the existing literature by analysing and providing evidence on the performance and risk diversification benefits of M-REITs and S-REITs.
Literature Review
Findings by Hamzah, A.H. et al. (2010) showed the extent of systematic risk in M-REITs during the economic crisis and post-crisis periods, while determining the degree of return that REITs offered compared to the market portfolio during those periods. The research found that the risk-adjusted performance of M-REITs varied from time to time, and concluded that the systematic risk of M-REITs was significantly higher during the economic crisis period than during the post-crisis period. Ng, Lim, Lau, and Yuen (2015) analysed the risk-adjusted performance of sixteen listed property trusts in Malaysia from 2007 to 2015 using the three main standard performance measurement tools, the Sharpe Ratio, Treynor Ratio and Jensen's Alpha, to estimate the risks, returns and risk-adjusted performance of the respective M-REITs. They suggested that investing in M-REITs would provide a preferable return because all the M-REITs outperformed the market benchmark during that period. This was consistent with the findings of Smith & Shulman (1976) that REITs tend to provide a higher return than the market index and savings accounts. The study also drew on Markowitz's (1952) portfolio theory, which emphasizes accomplishing a desirable fund performance at any risk level by utilizing leverage as an evaluation tool; its results gave investors an optimal impression by assisting them in their investments into REITs. In the Singapore context, Liow (2001) conducted a study investigating the risk-adjusted investment performance of S-REITs and property stocks over a 25-year period. The study employed the Sharpe Index and a Jensen time-varying abnormal return index to examine the risk-adjusted performance and portfolio return of all the S-REITs from 1975 to 1999. The outcome implied that the S-REITs outperformed the market portfolio, with higher returns and lower risk levels. Another study investigated the overall risk-adjusted performance of Singaporean financial vehicles from 1975 to 1995, such as the stock market, property stocks, and residential, commercial and industrial properties (Liow 1997a). That study concluded that there was a significant difference between the excess return generated from owning direct properties and the excess return generated from owning property stocks such as REITs. Peng Liu (2010) identified certain corporate finance issues involving REITs: capital structure, corporate governance, dividend pay-out policy, and initial public offerings are the main issues. The study also explored several regulatory constraints or requirements for establishing a REIT, such as distribution requirements, asset requirements, income requirements, and ownership requirements. Hartzell, Sun, and Titman (2006) examined how the corporate governance of a firm affects REIT investment decisions. The study found that the investment performance of REITs was strongly dependent on how well the REIT conducts its corporate governance, and that REITs that adhered to good corporate governance were led to positive real estate investment opportunities. The present study contributes to and extends the evaluation of the risk-adjusted performance of S-REITs by employing the same performance measurement methods as are applied to M-REITs, and compares the results between the M-REITs and S-REITs using the classical measurement tools of Jensen's Alpha (1968), Sharpe (1966) and Treynor (1965).
DATA AND METHODOLOGY
The sampling data consist of 16 M-REITs for the period from 2007 to 2016 and 26 S-REITs for the period from 2002 to 2016. The weekly return of each index was calculated as

Rindex = ((It − It−1) / It−1) × 100 (2)

whereby Rindex = index return for week t, It = closing index value at the chosen day of week t, and It−1 = closing index value at the chosen day of the week before week t. The standard deviation of each REIT was computed and subsequently interpreted to determine the volatility of each REIT against the respective property index (i.e., FBM Kuala Lumpur Property or FTSE ST REIT). The standard deviation of a REIT is a statistical measure of the volatility of its sample weekly returns. An investment portfolio with a lower standard deviation than its benchmark may seem preferable for risk-averse investors, because the lower the standard deviation, the lower the risk or uncertainty within the portfolio (for example, the return of the portfolio does not change dramatically over a period of time). For an investor who prefers investments with a low standard deviation, the implication is that the potential return of the investment will change at a steady pace. The standard deviation is calculated as

σ = √(Σ(Xi − μ)² / n) (3)

whereby Xi = weekly return of the REIT, μ = the mean return of the REIT for the year (%), and n = sample period (years). Apart from the above, the total risk was computed, comprising systematic risk and unsystematic risk, together with the diversification measure, and compared among the 26 S-REITs and 16 M-REITs. The total risk is calculated as

σi² = βi²·σm² + σe² (4)

whereby σi² = total risk of the REIT, βi² = square of the beta of the REIT, σm² = variance of the return of the market portfolio, βi²·σm² = systematic risk of the REIT, and σe² = unsystematic risk of the REIT; a sketch of these risk computations follows this paragraph. Two methods were used to calculate the diversification measures of the REITs. Diversification can be defined as the process of allocating capital in order to reduce exposure to risk; from the investor's perspective, it is a way to reduce volatility by investing in a variety of assets.
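As an illustration of these computations, here is a minimal sketch; the function names are ours, and the inputs are assumed to be aligned weekly return series for a REIT and its market index.

```python
# Sketch of the return and risk computations described above.
import numpy as np

def weekly_returns(index_values):
    """R_t = (I_t - I_{t-1}) / I_{t-1} * 100, from weekly closing values."""
    I = np.asarray(index_values, dtype=float)
    return (I[1:] - I[:-1]) / I[:-1] * 100.0

def risk_decomposition(reit_returns, market_returns):
    """Split total risk into systematic (beta^2 * sigma_m^2) and unsystematic parts."""
    beta = np.cov(reit_returns, market_returns, ddof=0)[0, 1] / np.var(market_returns)
    total = np.var(reit_returns)                     # sigma_i^2
    systematic = beta ** 2 * np.var(market_returns)  # beta_i^2 * sigma_m^2
    unsystematic = total - systematic                # sigma_e^2
    return beta, total, systematic, unsystematic
```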
The first method is to divide the unsystematic risk by the total risk, σe² / σi², whereby σe² = unsystematic risk of the REIT and σi² = total risk of the REIT. If the ratio is closer to 0, it implies that the unsystematic risk of the REIT is less significant. The second method is one minus R-squared (1 − R²). If the diversification value computed is high, the diversification opportunities or risk diversification benefits are high. If the value is near zero, there is less unsystematic risk in the portfolio and more systematic risk, which cannot be diversified away; however, if the diversification measure has a value near one, the portfolio consists mainly of unsystematic risk, which can be diversified away. In addition, the R-squared of each REIT is computed to examine the extent to which the movement of a security or portfolio can be predicted by the movement of the portfolio benchmark. The R-squared demonstrates the relationship between total risk and systematic risk, as it explains the degree to which total risk is driven by systematic risk: R² = (βi²·σm²) / σi², whereby R² = R-squared, βi² = square of the portfolio's beta, σm² = variance of the return of the market portfolio, βi²·σm² = systematic risk component of the REIT, and σi² = total risk. The higher the value of R-squared, the higher the chance of a security or portfolio moving in the same direction as the market index; a high R-squared indicates that the total risk inherent in a REIT is strongly affected by systematic risk, and vice versa. In contrast, a low R-squared denotes that the security or portfolio does not move along with the market index; in other words, it does not act much like the market index. Subsequently, the risk-adjusted performance measures of the REITs are computed using the Sharpe Ratio, Treynor Ratio and Jensen's Alpha to ascertain how the REITs perform against the risk estimated, as well as to identify the possible excess return of each REIT against the market index. The Sharpe Ratio measures the return earned in excess of the risk-free rate of return per unit of standard deviation of an investment portfolio; its three main components are the risk-free rate of return, the average return of the portfolio, and the standard deviation (volatility), which represents the diversity of the returns over the sampling period: SR = (ri − rf) / σi, whereby SR = Sharpe Ratio, ri = average return of the REIT, rf = risk-free rate of return, and σi = standard deviation of the REIT. A higher Sharpe Ratio denotes that the portfolio generates a greater return relative to the portfolio benchmark, while a negative Sharpe Ratio indicates that the portfolio generates a return below the risk-free rate. The Treynor Ratio measures the return generated by the investment portfolio on a risk-adjusted basis: T = (ri − rf) / βi, whereby T = Treynor Ratio, ri = average return of the REIT, rf = risk-free rate of return, and βi = beta of the portfolio. To justify the performance of REITs, a Treynor Ratio with a positive value is always preferable in REIT markets; a sketch of these performance measures is given below.
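A compact sketch of these measures follows; Jensen's Alpha (defined in the next paragraph via the CAPM) is included for completeness, and all returns are assumed to be expressed in the same units.

```python
# Sketch of the risk-adjusted performance and diversification measures.
def sharpe_ratio(r_i, r_f, sigma_i):
    return (r_i - r_f) / sigma_i       # excess return per unit of total risk

def treynor_ratio(r_i, r_f, beta_i):
    return (r_i - r_f) / beta_i        # excess return per unit of systematic risk

def jensens_alpha(r_i, r_f, r_m, beta_i):
    return r_i - (r_f + beta_i * (r_m - r_f))  # return above the CAPM prediction

def diversification_measures(unsystematic, total, r_squared):
    # Method one: unsystematic/total risk; method two: 1 - R-squared.
    return unsystematic / total, 1.0 - r_squared
```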
The higher the Treynor Ratio, the greater the return generated by the portfolio against the portfolio benchmark; a Treynor Ratio with a negative value indicates that the estimate of the performance of a REIT is not optimistic. Jensen's Alpha is an abnormal-return evaluation tool that utilizes the capital asset pricing model (CAPM) to estimate the rate of return on the basis of market volatility, by measuring the REIT's beta and comparing it with the market beta; the given portfolio's beta denotes the volatility of the REIT relative to the market as a whole and represents the risk arising from market movements. Jensen's Alpha is computed as α = ri − [rf + βi(rm − rf)], whereby ri = average return of the REIT, rf = risk-free rate of return, rm = average return of the market portfolio, and βi = beta of the portfolio.

(Insert Table 1 and Table 2.)

The total market risk of S-REITs is relatively higher than the total market risk of M-REITs. Those S-REITs with high beta values contribute a high level of systematic risk to the S-REIT market. From the M-REIT perspective, the findings showed that the volatility of each M-REIT is low and contributes a lower level of systematic risk than the market portfolio. In short, it can be speculated that M-REITs form a defensive investment portfolio that ensures regular portfolio rebalancing, while S-REITs form a speculative investment portfolio that presents more risks and uncertainties. Table 1 shows that the R-squared values of M-REITs are relatively low, with an average of 0.0454, compared with S-REITs, whose average of 0.3622 is about eight times higher. This can be interpreted as the funds of S-REITs being more highly diversified. On average, the total risks of both S-REITs and M-REITs are strongly affected by unsystematic rather than systematic risk factors. However, the average diversification measure of M-REITs is higher than that of S-REITs, at approximately 0.95464 and 0.63781, respectively. For M-REITs, the highest and lowest diversification values were 0.99942 (Al-'Aqar Healthcare REIT) and 0.87968 (Tower Real Estate Investment Trust). In contrast, the highest and lowest diversification measures of S-REITs were 0.95365 (Cache Logistics Trust) and 0.41614 (Suntec Real Estate Investment Trust). This implies that M-REITs have greater opportunities for diversification. Table 3 and Table 4 present the Sharpe, Treynor and Jensen's Alpha ratio analyses of M-REITs and S-REITs, respectively.

(Insert Table 3 and Table 4.)

Based on the findings above, the decision rule is to not reject the null hypotheses (H0) and to reject the alternative hypotheses (H1) (Table 5). First, M-REITs have lower betas than S-REITs. The Sharpe ratio, Treynor ratio and Jensen's Alpha results show that M-REITs have better risk-adjusted performance than S-REITs: M-REITs have higher averages for all three measures.
CONCLUSION AND IMPLICATIONS
This study was conducted to compare and analyse the overall performance of M-REITs and S-REITs. The findings suggest that investors with a low risk appetite could consider investing in M-REITs because they carry lower risk than S-REITs, and they outperformed the Malaysian T-Bills (an investment at the risk-free rate of return) and the FBM Kuala Lumpur Property Index.

In conclusion, investors should plan their own strategy throughout their investment plans, along with the essential technical analysis of market cycles. Given some of the major swings and volatility in the market as well as the economy, REITs offer certain protection against capital loss and help safeguard the investment values of investors. The findings of this research may assist both investors and readers in understanding the total risk involved in M-REITs and S-REITs by providing a useful quantitative evaluation of the current and past performance of Malaysian and Singaporean Real Estate Investment Trusts, helping investors choose the better investment tool. Moreover, the quality of corporate management, trust management, asset quality and the growth strategy of each REIT have to be evaluated by investors in order to make a precise investment decision. | 3,543.4 | 2018-01-01T00:00:00.000 | [
"Business",
"Economics"
] |
Towards Realising FollowMe User Profiles for Macro-Intelligent Environments
In this paper, we introduce the concept of a Large-Scale Intelligent Environment (LSIE) and provide an introduction to the use of bigraphs as a formal method for description and modelling. We then propose our MacroIE model as a solution to the LSIE problem and describe how that model may be implemented to achieve a continuity of experience for end users as they travel from place to place (a technology we call FollowMe). Our initial experiments with these implementations are presented, providing some valuable insights and promise for future refinement towards real-world deployment.
Introduction
Intelligent Environments are a wonderful example of ubiquitous computing [1,2], but ask two different research groups what an Intelligent Environment (IE) is, and you may get two fundamentally different answers. In our vision, an IE is a "common" space (such as a home, classroom or office) that contains a plethora of embedded computer devices that are interconnected and work together to enrich user experiences. These devices are generally controlled by a group of intelligent software agents that sense, reason and act to achieve certain goals on behalf of the user. Thus, an IE itself exhibits an ambient intelligence (AmI) quality that we as occupants perceive through environment adaptation.
An IE is able to recognise human occupants, reason with context and adapt itself to meet occupant needs by learning from their behaviour [3]. The University of Essex has a purpose built IE called the iSpace, which is a fully functioning apartment (complete with bedrooms, kitchen, bathrooms, etc.) that has been augmented by a plethora of sensors and imbued with AmI. The iSpace contains false walls and false ceilings, allowing devices to be embedded directly into the fabric of the apartment. By using a distributed architecture for device deployment and interconnection, the iSpace acts as a template for the creation of new spaces. This architecture accounts for technology heterogeneity by using gateways that act as proxies for individual technologies; each gateway presents virtual devices to the IE network and translates action invocations into native actions (using some suitable middleware, such as Universal Plug and Play (UPnP) [4,5]). In this way, the technology deployment within an IE can be heterogeneous, but the network is homogeneous, and so intelligent software agents can communicate with any device that is connected. This virtualisation approach also permits the creation of abstract devices; to give a practical example: each individual light in an environment can be represented on the network to expose control functionality (on/off/set-level). This allows remote control of the lighting, as well as the brightness level of each individual lighting unit. Having control over individual units, while desirable in some circumstances, is not always convenient. Ideally, we would also have control over groups of lights (e.g., living room lights, kitchen lights, and so on). Our architecture allows for these abstract groupings to also be natively represented.
Each IE has a set of users, each user owning their own user profile and each user profile containing a unique set of preferences and applications. "Applications" has a specific meaning in this context; users can create their own applications by composing resources that are available within an IE on an ad hoc basis. It follows that an event occurring (e.g., a DVD player powering on) would trigger a rule in an intelligent agent, causing the agent to dim the living room lights and close the curtains. The user could call this their "movie application", and it would be unique to that particular user; a sketch of such a rule is given below.
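To make the composition idea concrete, the following is a minimal sketch of such a user-defined rule. The class, device names and method calls (set_level, close) are hypothetical stand-ins for the virtual-device handles an IE network might expose, not the iSpace's actual API.

```python
# A sketch of a user-defined "movie application" rule over abstract devices.
class MovieApplication:
    def __init__(self, lights, curtains):
        self.lights = lights      # abstract group, e.g. "living room lights"
        self.curtains = curtains

    def on_event(self, device, event):
        # Trigger: the DVD player powering on.
        if device == "dvd_player" and event == "power_on":
            self.lights.set_level(20)   # dim the living room lights
            self.curtains.close()
```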
The majority of our prior research has been conducted at this apartment scale and has led us to engage in a project called ScaleUp that is investigating the theme of increased IE deployment size towards Large-Scale Intelligent Environments (LSIE). ScaleUp is a collaboration between the University of Essex (UK) and King Abdulaziz University (Saudi Arabia), with the aim of addressing the scalability issues associated with realising a real-world LSIE deployment. Unfortunately, it is not as simple as taking the existing principles and applying those to a larger physical space. As the space grows in physical size, it becomes exponentially more expensive in terms of implementation time, money and resource management. It is a natural progression for these environments to scale up, so it is vital that these scalability issues are resolved (the outline of this project is given in greater detail in a previous paper [6]).
We begin this paper by providing a brief overview of work that relates to LSIE realisation (Section 2). This is followed by an introduction to the use of bigraphical notation in describing IE systems (Section 3), a methodology we are currently experimenting with and hope to make popular across the field. The focus of this paper proposes the use of IE composition to realise an LSIE and address the main scalability issues; we call this a MacroIE (Section 4). To illustrate this, we describe how the user profile that is associated with a person can follow them from IE to IE (an extension of our previous FollowMe work). The work is made concrete through three implementation approaches (Section 5) and associated experimental results (Section 6).
Related Work
Some of the earliest purpose built IE examples are the "Intelligent room", built in Bristol, UK [7], and the iSpace, built at The University of Essex [8]. Since then, the field has made massive strides of progress, with other spaces being built; examples include workplace environments, such as the "smart lab" at the University of Deusto [9], and home place environments, such as the Phillips "HomeLab" [10], which is a fully functional apartment similar to the iSpace. The Cisco "Internet House", while larger than an apartment, was built to show an environment with an always-on internet connection and appliances that could be controlled via the internet (it also is an example of how different research groups interpret an IE differently). The model that these other environments propose would allow them to fit into the model this paper proposes, creating potential opportunities for collaboration between research groups, which would be the first step towards the unification of research in this area.
Much of the existing literature within the field reports on work that has had a focus on the internal aspects of an IE, but there are indications that the field as a whole is starting to consider IE deployment on a larger scale [11][12][13].
The increasingly popular movement of the Internet of Things (IoT) [14] is converging towards an IE ideology [15]. IoT started life with the vision of creating an environment full of objects that were uniquely identified by pointers to a centralised database, in order to change a user's experience of that environment for the better [16,17]. This vision has evolved and now has been stated as "Things having identities and virtual personalities operating in smart spaces using intelligent interfaces to connect and communicate within social, environmental and user contexts" [17]. Although AmI was not part of the original IoT vision, the convergence of ideas has seen its incorporation [18,19]. The IoT suffers from similar problems to IEs in that there is very apparent fragmentation in the research taking place, with many different bodies supplying different, yet overlapping definitions of the IoT [11,[20][21][22]. It will be interesting to follow the trajectory of this research to see if it starts to overlap with the field of IEs further.
Publications are starting to appear that note that there is a desire to start scaling up existing implementations of pervasive computing [23,24]. The fact that the majority of these publications only briefly touch upon the topic of inter-IE communication and scaling provides further emphasis on the originality and novelty of the proposed research area. Just as publications are starting to appear on scaling up these environments, other research groups have stated a need for a beneficial formal framework [25][26][27] through which to model and describe the various works. These papers tend to focus on implementations of proprietary methods for specific, existing implementations. It is apparent that the community needs a more abstract solution that would serve the entire community as a whole, rather than niche cases. Habib published a paper [28] on bringing together geographically separated IEs; however, that paper uses a virtual world in an attempt to bring together these geographically separated environments (similar to the concepts outlined in [29,30]). Our work is focussed more towards bringing together these environments in the physical world, though it would be feasible to say that these ideas could be implemented in the virtual world, as well. There has been some movement into creating larger scale environments, such as [31], but this particular instance is aimed at creating large data sets from a series of individual environments.
Bigraphical Notation for IE Description
There is an outstanding and recognised need within the IE field for common formalisms that enable IE design to be described and modelled. This is synonymous with the need for the Unified Modeling Language (UML) to describe and model software systems. The need is reflected by the diversity of description frameworks used across the field, leaving comparison of different models difficult. To address this problem, we have been examining the use of bigraphical representations to describe IE problems and design; this section provides a brief overview of bigraphs as a primer for later sections.
Bigraphs offer a way to diagrammatically describe and represent a system, so that it can be easily understood by visual inspection, whilst simultaneously encoding structure and entity relationships [32]. This is all backed by mathematical principles that underpin the model and can be reasoned with robustly if the additional detail is required. As the name suggests, a bigraph consists of two graphs (Figure 1a): a place graph and a link graph. These two graphs share nodes. The place graph is restricted to being a tree (no cycles) and is contained within a forest, whereas a link graph tends to be a hyper-graph (a link can connect more than two objects) [33]. By representing both components of the bigraph in one picture (Figure 1b), we can get an impression of object locations and connections simultaneously [34]. Bigraphical Reactive Systems, introduced in [35], are a "general graphical semantical framework for reactive systems unifying the feature of dynamic communication links introduced with the π-calculus [36] and the feature of mobile nested spatial structures introduced with the Mobile Ambients calculus [37]". Simply put, Bigraphical Reactive Systems are comprised of a set of bigraphs and a set of reaction rules, which are used to reconfigure the bigraphs.
Bigraphs offer a simple understanding of newly introduced concepts and show possibilities for specifying behaviours at more than one level of abstraction. In Figure 1, you can see that the nodes in the graphs represent physical entities (people, spaces, iPads, etc.), but they also include the notion of connections or communication. Figure 1b shows users connected to a device (which could be a user typing on their iPad) and these devices connected to the space (which could show the wireless connection to the space's local area network). However, bigraphs can represent more than just the physical layer; the nodes themselves can represent conceptual ideas, too, such as passwords, agents, bits of software, etc. The idea of the nodes is to be quite general. This generality allows us to gain different perspectives on all aspects of a system, whether it be how a user gains entry to a space or how the agents within a space communicate with one another.
While bigraphs are a high-level, visual formal model, it is possible to break them down into their algebraic form. To give an example of their use, Figure 2 shows a lecturer, Jenny, transitioning from outside an IE to inside, using a persistent key (which, in this case, is a keypad on the wall next to the door). By simple inspection, these bigraphs are not very formal, but are designed to fit Milner's vision for a hierarchy-of-models [38]; thus, we need a mathematically precise description of what a bigraph is: a bigraph is a quintuple $B = (V, E, ctrl, prnt, link)$, where V is a finite set of nodes, E is the set of hyperedges, ctrl is the control map that assigns controls to the nodes, prnt is the parent map that defines the tree structure (place graph) and link is the link map that defines the link structure (link graph) [32]. A minimal data-structure encoding of this quintuple is sketched below.
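As an illustration of how this quintuple might be held in software, here is a minimal sketch. The representation, and the Figure 2 fragment encoded in it, are our own illustrative choices, not part of any bigraph tool's API.

```python
# A minimal encoding of the bigraph quintuple B = (V, E, ctrl, prnt, link);
# field names mirror the formal definition above.
from dataclasses import dataclass

@dataclass
class Bigraph:
    nodes: set   # V: finite set of nodes
    edges: set   # E: hyperedges of the link graph
    ctrl: dict   # node -> control (its type/kind)
    prnt: dict   # node -> parent node or region (place graph tree)
    link: dict   # node/port -> hyperedge (link graph structure)

# Figure 2 fragment: Jenny outside the iSpace, connected to the keypad.
b = Bigraph(
    nodes={"Jenny", "Keypad", "iSpace"},
    edges={"e0"},
    ctrl={"Jenny": "User", "Keypad": "Key", "iSpace": "Space"},
    prnt={"Jenny": "region0", "Keypad": "iSpace", "iSpace": "region1"},
    link={"Jenny": "e0", "Keypad": "e0"},
)
```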
This definition can be used to describe Figure 2. The dotted lines in Figure 2 represent regions; a region allows the bigraph to give a notion of locality for individual components (in this case, Jenny approaches the iSpace and uses the keypad on the wall of the iSpace to gain entry). We can represent more than one locality within a single bigraph.
Figure 3 shows that you may have multiple regions contained within the same bigraph, which, in turn, means that the place graph will contain a forest of tree graphs, one for each region.
Figure 4 shows a scenario where Jenny is preparing a lecture in the iSpace, then physically travelling to the iClass (where the iSpace and iClass are geographically distal).
In the real world, it is likely that these environments will contain more than just a few members or a few devices; this could lead to the bigraphs becoming incredibly complex, very quickly. To counter this, it is possible to abstract away the intricate details, while keeping the notion that the space contains something. Consider that we wish to illustrate a member entering a space (as shown in Figure 5), but we don't need to know how the existing nodes within that environment are behaving. We can use a site to convey that the space contains other nodes, which will persist over the reaction rule, but are not relevant to the action performed. In Figure 5, there is a dotted square contained within S; this is the site. It provides enough information to say that S contains other nodes, but these nodes are not of interest to us in this reaction rule. This section has provided a brief introduction to the basic bigraph concepts, such that the following sections are more easily understood. The use of bigraphs as a formal model for application within the IE field is an ongoing subject of research and is discussed at greater length in [39,40].
MacroIE: Realising an LSIE through Composition
In order to realise the creation of an LSIE, new approaches are required in order to account for the scalability limitations of existing IE methods (from technical, security/privacy, management and usability perspectives). To solve these scalability problems, we propose the concept of a MacroIE (the word macro is used here as it is in the wider field of computer science: to define an input pattern that will create a larger, more complex output [41]): a single LSIE that is composed from a set of smaller IEs, rather than existing as a monolithic whole. This means each element of a MacroIE is autonomous, distributed and self-governing. This is in contrast to the Monolithic IE, which attempts a top-down approach to manage all the low-level details across the entire space.
Figure 6a shows a set of environments that are geographically co-located, but this need not be the case; the environments can be distal or proximal. The connection between each environment is electronic, so there is no requirement for them to be in the same campus, territory or, even, country. It is entirely feasible to interconnect several IEs from different countries to form a MacroIE that spans continents. Figure 6b illustrates that the regions represented in Figure 6a can be geographically sparse. This suggests that there are multiple perspectives to the model: the user perspective, management perspective, and so on. The model is also designed to be dynamic, allowing individual environments to be added/removed with relative ease (i.e., a MacroIE has a modular structure). This introduces some interesting insights into the way security management would work; as each user may have a unique view of the overall MacroIE, the traditional role-based security or user-based security models may not fit, requiring an entirely new model to be created.
Traditional computer systems authenticate a user at initial session login [42]. This model can also be applied to an IE, requiring users to explicitly login, using some form of contextual credentials in order to access the assets within that environment. By realising a MacroIE, the login session can be shared among the component IEs, thus realising a continuity of experience without the obstructive need to create a new session when the user changes context (transitions from one IE to another). This is a concept we call FollowMe [43], as it enables a user profile to follow the user from place-to-place in an unobtrusive way. A continuity of experience is thus achieved by a user having continuous access to their digital assets and services, whilst also enjoying environment adaptation, where the spaces they inhabit are dynamically adjusted according to preference and context.
FollowMe forms a critical part of the MacroIE functionality/behaviour and influences both design and evaluation. We seek to realise FollowMe user profiles in an efficient, scalable and user-friendly way, similar to the way in which a mobile phone can roam between different cells, whilst maintaining minimal interruption to service. When a user roams between IEs, how does the environment obtain the relevant information (environment preferences, authentication and authorisation details, available applications) about that user? The abstraction of this problem is shown by the bigraph in Figure 7 and described below. The abstract solution shown in Figure 7 shows that once a user has gained access to the space, the profile for that user is acquired from somewhere and instantiated within that space. Although not explicitly mentioned, security has a vital role in the MacroIE model; thus, we have used a key to represent this security layer. This key could be any appropriate authorisation and authentication solution (something you know (e.g., username/password), something you have (e.g., a Radio-Frequency Identification (RFID) tag) or something you are (biometrics) [44,45]) and is required by the user to gain access to the space (the need for that key disappears once the user has gained entry, as shown by the reaction rule).
MacroIE Implementation
In this section, we propose three approaches to implementing the abstract MacroIE; the first relies solely on the storage of user profiles in a cloud repository, the second relies on the use of a trusted device that a user carries with them (for example, a mobile phone), and the final solution employs a combination of the two (utilising both cloud and trusted device functionality).
Implementation A: Cloud Repository
Cloud computing is a term used to describe the delivery and/or consumption of computing services over a network (usually the Internet) [46,47]. With cloud computing, the end user is not aware of the location of the service, but is merely concerned with the quality of the service. By using a cloud-based approach to store FollowMe user profiles, an authenticated user session will grant permission for an IE to retrieve certain portions of the user profile and instantiate them within the IE (a minimal retrieval sketch is given after this paragraph). As shown in Figure 8, the notion of some form of key is still required to identify the user and either trigger a login or session transfer. Upon success, the reaction rule shows that the key and authentication server play no further role. Each individual component in Figure 8 is shown to be contained within separate regions [48], but it is possible for them to be contained within the same region. Take this example: Jenny is a lecturer at the University of Essex. She is currently working in the iSpace preparing her next lecture. The time approaches for her to present the lecture, and so, she leaves the iSpace; everything she has been working on is being uploaded and stored in the cloud as she walks to the iClassroom. When she arrives at the iClassroom, she gains entry by waving her RFID tag over the reader, prompting her profile to be downloaded from the cloud, which recognises her status as the teacher and automatically adjusts the lights in the classroom for teaching mode. She pulls up the lecture slides she was previously working on in the iSpace and is almost immediately prepared for the students to arrive in the classroom.
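The retrieval step might look something like the following sketch. The endpoint URL, the Authorization header scheme and the profile layout are all assumptions for illustration; the paper's actual server (described later) is a Jersey/REST implementation whose exact API is not specified here.

```python
# Sketch of the cloud-repository retrieval step for an XML user profile.
import urllib.request
import xml.etree.ElementTree as ET

def fetch_profile(server, user_id, session_token):
    url = f"https://{server}/profiles/{user_id}"        # hypothetical endpoint
    req = urllib.request.Request(url, headers={"Authorization": session_token})
    with urllib.request.urlopen(req) as resp:
        return ET.fromstring(resp.read())               # parse the XML profile

# profile = fetch_profile("cloud.example.org", "jenny", token)
# The IE would then instantiate the preferences/applications in the profile.
```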
This scenario immediately presents some potential issues that will need to be resolved: if a connection cannot be established to the server, how can the user be authenticated and the profile made accessible? Does the environment load a blank profile locally and attempt to synchronise at given intervals? The use of formal methods to model, discover and solve these problems is therefore essential in the design process.
Of course, a cloud-based user-repository can be used for far more than simply storing, syncing and serving user profiles as static data packages; one of the more useful benefits of using a cloud-based approach is that it can be used as a scalable resource to host the execution of applications and deliver services remotely. The effect of this would manifest in each IE that the user visits (through service delivery). These services would be synonymous with the services provided by the space itself; the user would be unaware of what was providing the services, just that the services are available for consumption. One such function could be a messaging service; take the previous example of our lecturer, Jenny. Fellow lecturer Ingrid wishes to contact Jenny immediately but does not know where she is. The server would know which IE, if any, Jenny was currently active in and, providing Ingrid was also in a similar environment, she could send a message to Jenny via the server, acting as a simple routing host. Perspectives of the MacroIE were mentioned earlier, and this holds true for the services; as a simple user, the instant messaging service would allow communication between all their friends, but from a management point of view it would be possible to see who is active in which space. This is similar to how instant messaging traditionally works, but illustrates how the cloud server could be used as more than just a file repository. This leads into the concept of one's entire presence following them around the MacroIE: their preferences, their documents, communications... the possibilities are endless!
Implementation B: Trusted Device
A trusted device is a personal device that the user would carry around with them the majority of the time and over which they have a level of ownership sufficient to entrust it with certain rights. The most obvious current example would be a smart-phone, given their ubiquity in society and ability to run powerful applications. This trusted device would contain all the relevant information about the user on the device itself; upon successful user authentication, the environment would then retrieve the information from the device via an ad hoc network connection. This method has the advantage of not requiring an external connection (or a complex network), thus being ideal for remote locations, where a strong internet connection is not available (e.g., the International Space Station).
Figure 8 shows that the user still requires a key to gain entry to the IE (that could be verified against an authentication server or could be based on certificate validation). Advances in mobile technology (particularly Near Field Communications (NFC)) mean that the trusted device could be used to also present the key.
We go back to our lecturer, Jenny. Again, she is working in the iSpace, ready for her lecture in the iClassroom. This time when she leaves the iSpace for the classroom, the information is stored on her trusted device (which, in this case, is her smartphone). She is logged out of the iSpace when her trusted device is out of range of the environment (either the environment's local area network or some location system, such as Ubisense). On the way to the classroom, Jenny makes changes on the trusted device that affect her profile. When she comes into range of the classroom, the trusted device automatically connects to the network and sends a handshake message; if accepted, she will be logged into the space and granted access. As before, the information will be pulled off the device (including the new changes), and she will be ready to teach; a sketch of this presence-driven session logic follows.
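The following is a minimal sketch of that presence-driven flow. The env and device objects and their methods (discover, handshake, load_profile, save_profile) are hypothetical placeholders for whatever discovery mechanism (e.g., UPnP or Ubisense) and profile transport a deployment actually uses.

```python
# Sketch of the trusted-device session logic: log in when the device comes
# into range of the environment's network, log out when it drops out of range.
import time

def session_loop(env, device, poll_seconds=2):
    logged_in = False
    while True:
        in_range = env.discover(device)              # hypothetical presence check
        if in_range and not logged_in:
            if env.handshake(device):                # verify the device's token
                env.load_profile(device.profile())   # pull the profile off the device
                logged_in = True
        elif not in_range and logged_in:
            env.save_profile(device)                 # push changes back, then log out
            logged_in = False
        time.sleep(poll_seconds)
```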
This presents some potential issues that will need to be resolved. In an ideal situation, every member would have the exact same trusted device, but in reality, the fragmentation of smartphone devices is somewhat prevalent in the consumer market, so it is of utmost importance that a suitable standard be established for this kind of interaction.
Implementation C: Hybrid
While there are situations in which each of the two previous implementation approaches are best suited, a hybrid approach could address the majority of use-cases and do so in a way that addresses the respective shortcomings.
Figure 9 shows that the user requires a key to gain entry to the IE. The reaction rule shows that once the user is authenticated, the space has an active connection to both the trusted device and the user profile repository. This allows access to both a user profile and cloud-hosted services.
Figure 9. A user presents a key that enables an Intelligent Environment (the iSpace, in this case) to verify the user session. Upon success, the user is granted access to the space, and the user profile is accessible from both the trusted device (TD) and the user profile repository.
This hybrid approach provides a more complete system, as it inevitably cuts out some of the problems presented. The intent behind this approach is to use the trusted device to perform session-authentication and to store part of the user profile (for example, the more frequently used and static content), while the remainder is accessible from the cloud-based repository. The hybrid approach also permits the use of cloud infrastructure to host the execution of services and deliver them to the local user IE. By using this approach, a more efficient and fault tolerant experience can be delivered to the end-user.
Experimental Results
This section presents the results derived from initial experiments with prototype cloud and trusted device implementations. Each prototype provides functionality for the storage and retrieval of Extensible Markup Language (XML) encoded user profiles. Repetition was used to collect 100 samples for each experiment, where each sample records the amount of time required to acquire the user profile from the respective repository (cloud or trusted device-based). While there are several other metrics that we are also interested in, these initial experiments reflect the extremely important user requirement that these environments operate in a robust and real-time fashion; a sketch of the timing procedure is given below.
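As a sketch of how such samples might be collected and summarized (including the five-percent trimming applied later in this section), consider the following; the fetch_profile callable is a stand-in for either retrieval path.

```python
# Sketch of the timing methodology: 100 retrievals per repository, then the
# top and bottom five percent are dropped before summarizing.
import statistics
import time

def time_retrievals(fetch_profile, n=100):
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        fetch_profile()                     # cloud- or device-based retrieval
        samples.append((time.perf_counter() - start) * 1000.0)  # milliseconds
    samples.sort()
    trimmed = samples[n // 20 : n - n // 20]  # drop upper/lower five percent
    return statistics.mean(trimmed), statistics.stdev(trimmed)
```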
The trusted device implementation was built using an iPhone 4 (running iOS 5.1.1) and written in Objective-C, with an SQLite database to store the XML encoded user profiles. An Android implementation has also been developed and tested, producing results consistent with the iPhone. An application was developed to perform user authentication by sending encrypted credentials (a unique token from the device, which is registered to the user profile); the environment checks that the token is valid and, upon success, authenticates the user assigned to that token in the environment. This is not the most efficient method of performing authentication, especially as there are many use cases where this would not work; but for the proof-of-concept experimentation, it was adequate. Figure 10 shows a simple UML sequence diagram showing the abstract authentication method of both the trusted device and the cloud server. All experiments were conducted in the Essex iSpace and iClassroom [29,49].
The two implementations were identical in functionality, with the trusted device implementation having an additional module to perform dynamic discovery (this was done via the UPnP protocol). By comparison, the network address of the cloud server was hardcoded into the IE software (for the sake of simplicity). When recording the execution times, the broadcast and discovery of the trusted device was not taken into account. The results for the trusted device experiments can be seen in Figure 11 below and show a good cluster between 18 and 28 ms; in terms of ensuring that the environment would adapt quickly to a user's presence, this is certainly acceptable. However, after each run of 20 samples, there is an abnormally high result. This was always the first execution of the device; we can account for these erroneous results due to software libraries being loaded, memory initialisation and network related delays. If we drop the top and bottom five percent of the results, the remainder lie between 18 and 28 ms (as shown in Figure 12). This results in an average time of 20 ms for the trusted device, with a standard deviation of 0.004, showing that the results are very consistent. The results of the cloud-based experiments are consistent with the performance of the trusted device; however, in the case of the cloud repository, these times are spread over a wider range than the trusted device, with an average time of 46 ms and a standard deviation of 0.17, which is considerably larger than the deviation of the trusted device. This is attributed to the fact that the user profile was being sent over a larger network (the Internet) in comparison to the trusted device (which was connected to the local network). Figure 13 illustrates this. As with Figure 12, the upper and lower five percent of the results were removed from Figure 13. The cloud server was based around a Jersey server implementation (the Jersey framework implements the Java API for RESTful Web Services (JAX-RS) reference, but also provides its own Application Programming Interface (API) that extends the toolkit). This was supplemented by a simple MySQL database to allow the user profiles to persist between transactions. More technical information about the cloud server solution can be found in [50].
An overlay of the results from Figure 13 with Figure 12 is shown in Figure 14. It is clear that the performance of the trusted device solution is superior, as reflected by the quicker average performance time (23 ms vs. 46 ms, respectively) and consistently performing within a tighter margin (highlighted by the standard deviation of the two: 0.004 and 0.17, respectively, again).
However, the results presented exclude the dynamic-discovery overhead incurred by the UPnP framework required to find the trusted device on the network. Figure 15 shows another overlay of the cloud and trusted device results, but this time, the trusted device results include the overhead incurred by dynamic-discovery. It is plain to see that the results are now similar, as reflected by the new average and standard deviation for the trusted device: 35 ms and 0.16, respectively. While the trusted device still performs faster on average, the spread of these results is very similar to that of the cloud server. This is an inherent problem of using a dynamic-discovery mechanism to locate resources at runtime [51] and provides an indication for an area of improvement in the trusted device implementation.
Conclusions and Future Work
As work in the field of Intelligent Environments continues towards large-scale real-world deployment, there is a clear and present need to not only solve the technical problems, but to also address the issues surrounding description and modelling of such systems. Through our numerous works, we are investigating these problems, whilst also remaining cognisant as to the critical nature of usability and social acceptance. This paper has introduced the area of Large-Scale Intelligent Environments (LSIE), in general, and has provided an introduction to the use of bigraphs as a formal method for description and modelling. We have also introduced the MacroIE model as a proposed solution to the scalability problems surrounding LSIE realisation. To provide some context to this model, we have also described implementation strategies, built two prototypes and provided experimental results regarding the performance times of those prototypes.
The results that we have presented are promising and help identify the benefits of the implemented solutions, whilst also enabling us to select parts of the solution that need optimisation.
In addition to the many scalability benefits of the MacroIE model, it also addresses a key real-world problem of heterogeneity: the model is not concerned about the specific implementation of each individual IE, but relies on encapsulation to abstract an IE into a homogeneous user profile API. We are currently updating our experimental labs to conform to this model, so that we may conduct more in-depth studies beyond the proof-of-concept work reported here. We hope that this will enable us to verify and refine the model, whilst also allowing us to establish some best practices (in particular, which implementations work best under which circumstances).
Figure 1. This figure shows the underlying bigraphs (a), containing the link graph and place graph, respectively, and then the result of representing both graphs in one picture (b). The information the nodes represent differs based on context; in this particular instance, S represents a space, U represents a user and D represents a device.
Figure 2. Jenny uses the keypad to gain entry into the iSpace. The dotted line represents a region.
Figure 3. A user presents a key that enables an Intelligent Environment (IE) (the iSpace, in this case) to verify the user session. Upon success, the user is granted access to the space and the user profile is accessible directly from the trusted device. The encapsulated node TD depicts the user's trusted device.
Figure 6. (a) shows a Large-Scale Intelligent Environment (LSIE) that consists of a network of smaller, individual Intelligent Environments (IEs); (b) shows that these individual IEs have no requirement to be geographically co-located to be considered within a MacroIE.
Figure 7. The key is used by the user to gain access to the Intelligent Environment.
Figure 8. A user presents a key that enables an IE (the iSpace, in this case) to verify the user session by communicating with an authentication server. Upon success, the user is granted access to the space, and the user profile is accessible from the user profile repository.
Figure 10. Unified Modeling Language (UML) sequence diagram of the user authentication in an environment.
Figure 11. Scatter graph showing the raw results from the trusted device experiments.
Figure 12. Scatter graph showing the results with the erroneous data points removed.
Figure 13. Scatter graph showing the sparse results of the cloud-based implementation experiments.
Figure 14. Scatter graph showing both the trusted device and cloud results.
Figure 15. Scatter graph showing both the cloud and full trusted device results. | 7,806.4 | 2013-07-30T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Stability of Stochastic Differential Delay Systems with Delayed Impulses
Introduction
Impulsive dynamical systems have attracted considerable interest in science and engineering in recent years because they provide a natural framework for mathematical modeling of many real world problems where the reactions undergo abrupt changes [1][2][3]. These systems have found important applications in various fields, such as control systems with communication constraints [4], sampled-data systems [5,6], and mechanical systems [7]. On the other hand, impulsive control based on impulsive systems can provide an efficient way to deal with plants that cannot endure continuous control inputs [3]. In recent years, the impulsive control theory has been generalized from deterministic systems to stochastic systems and has been shown to have wide applications [8].
In most recent research results, the impulses are usually assumed to take the form $\Delta x(t_k) = x(t_k^+) - x(t_k^-) = I_k(x(t_k^-), t_k)$, which indicates a state jump at the impulse time. However, time delays inevitably occur in the transmission of impulsive information; hence, input delays should be considered (see, e.g., [5,16]). In the context of the stability of deterministic differential equations with delayed impulses, several results have appeared in the literature (see, e.g., [17][18][19]). For example, in [17], asymptotic stability is investigated for a class of delay-free autonomous systems with impulses of the form $\Delta x(t_k^+) = B_1 x((t_k - d_k)^-)$, and a sufficient asymptotic stability condition is proposed involving the sizes of the impulse input delays. In [19], Chen and Zheng considered more general impulses taking the form $\Delta x(t_k^+) = I_k(x(t_k^-), x((t_k - d_k)^-))$ and obtained some criteria of exponential stability for nonlinear time-delay systems with delayed impulse effects.
However, most of the existing stability results for systems with delayed impulses concern deterministic differential systems. It is noticed that many real-world systems are disturbed by stochastic factors; therefore, it is interesting to study the stability of stochastic delay differential systems with delayed impulses. Recently, exponential stability was investigated for impulsive stochastic functional differential systems in [20], and exponential stability and uniform stability in terms of two measures were obtained for stochastic differential systems with delayed impulses. Motivated by the above works, the aim of this paper is to study the $p$th moment and almost sure exponential stability of a stochastic delay differential system with delayed impulses. It is shown that an unstable stochastic delay system can be successfully stabilized by delayed impulses. Moreover, it is also shown that if the continuous dynamic system is stable, then, under some conditions, the delayed impulses do not destroy the stability of the system. Our results generalize some existing results in [20,21].
The paper is organized as follows. In Section 2, we introduce the notations and definitions. We establish several stability criteria for stochastic differential delay systems with delayed impulses in Section 3. In Section 4, two examples are given to illustrate the effectiveness of our results.
Preliminaries
Throughout this paper, let $(\Omega, \mathcal{F}, P)$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t \ge 0}$ satisfying the usual conditions (i.e., the filtration is increasing and right continuous, while $\mathcal{F}_0$ contains all null sets). Let $B = (B(t), t \ge 0)$ be an $m$-dimensional $\mathcal{F}_t$-adapted Brownian motion. For $x \in \mathbb{R}^n$, $|x|$ denotes the Euclidean norm of $x$. For $-\infty < a < b < \infty$, we say that a function from $[a, b]$ to $\mathbb{R}^n$ is piecewise continuous if it has at most a finite number of jump discontinuities on $[a, b]$. In this paper, we consider a stochastic delay differential system with delayed impulses, denoted system (1), where $\{t_k, k \in \mathbb{N}\}$ is a strictly increasing sequence such that $t_k \to \infty$ as $k \to \infty$, and $\{d_k \ge 0, k \in \mathbb{N}\}$ are the impulsive input delays satisfying $d = \max_k d_k$ and $\tau = \max\{r, d\}$. As a standing hypothesis, the coefficients $f$, $g$ and the impulse maps $I_k$ are assumed to satisfy the necessary conditions so that, for any $\mathcal{F}_{t_0}$-measurable, piecewise continuous initial data $\xi$ on $[-\tau, 0]$ with values in $\mathbb{R}^n$, system (1) has a unique global solution, denoted by $X(t; \xi)$, whose segment process again belongs to this family.
The purpose of this paper is to discuss the stability of system (1). Let us begin with the following definition.

Definition 1. The trivial solution of system (1) is said to be:

(1) $p$th moment exponentially stable if, for any initial data $\xi$, $E|X(t;\xi)|^{p} \le C\,E\|\xi\|_{\tau}^{p}\,e^{-\lambda(t-t_0)}$ for all $t \ge t_0$ or, equivalently, $\limsup_{t\to\infty}\frac{1}{t}\log E|X(t;\xi)|^{p} \le -\lambda$, where $C$ and $\lambda$ are positive constants independent of $t_0$.
Main Results
Before establishing the main results, we derive the following lemma, which is useful in presenting them.

Lemma 2. Let assumptions (A1) and (A2) hold. Suppose that $\inf_{k\in\mathbb{N}}\{t_k - t_{k-1}\} = \beta_1$ and $(l_1 - 1)\beta_1 < d \le l_1\beta_1$ for some positive integer $l_1$. Then
$$E|X(t)|^{p} \le K_1 E\|\xi\|_{\tau}^{p}, \qquad t \in [t_0 - \tau,\, t_0 + d],$$
where $K_1 = 3^{l_1(p-1)}(1+h)^{l_1} e^{3^{p-1}L(d^{p} + d^{p/2})}$.

Proof. Since $(l_1 - 1)\beta_1 < d \le l_1\beta_1$, the maximum number of impulse times on the interval $(t_0, t_0 + d]$ is $l_1$. Suppose that the impulsive instants on $(t_0, t_0 + d]$ are $t_i$, $1 \le i \le m \le l_1$. For $t \in (t_0, t_1)$, using (A1), we have
$$\begin{aligned}
E|X(t)|^{p} &= E\left|\xi(0) + \int_{t_0}^{t} f(X_s, s)\,ds + \int_{t_0}^{t} g(X_s, s)\,dB(s)\right|^{p}\\
&\le 3^{p-1}\left[E\|\xi\|_{\tau}^{p} + (t-t_0)^{p-1} E\int_{t_0}^{t}\left|f(X_s,s)\right|^{p} ds + (t-t_0)^{(p-2)/2} E\int_{t_0}^{t}\left|g(X_s,s)\right|^{p} ds\right]\\
&\le 3^{p-1}\left[E\|\xi\|_{\tau}^{p} + L(t-t_0)^{p-1}\int_{t_0}^{t} E\sup_{t_0-r\le u\le s}|X(u)|^{p}\,ds + L(t-t_0)^{(p-2)/2}\int_{t_0}^{t} E\sup_{t_0-r\le u\le s}|X(u)|^{p}\,ds\right]\\
&\le 3^{p-1} E\|\xi\|_{\tau}^{p} + 3^{p-1}L\left[(t-t_0)^{p-1} + (t-t_0)^{(p-2)/2}\right]\int_{t_0}^{t} E\sup_{t_0-r\le u\le s}|X(u)|^{p}\,ds,
\end{aligned}$$
which implies the corresponding integral inequality for $E\sup_{t_0-r\le u\le t}|X(u)|^{p}$. Using the Gronwall inequality, it follows that the bound holds on $(t_0, t_1)$. According to (A2), we get the corresponding estimate across the impulse instant $t_1$; it follows that the bound propagates across $t_1$. Repeating the above argument gives that, for $t \in [t_0 - \tau, t_m]$, the stated bound holds. Since there are no impulses on $(t_m, t_0 + d]$, we obtain the claim on the whole interval. This completes the proof.

When the continuous dynamics in system (1) is unstable, the following theorem shows that system (1) can be stabilized by the delayed impulses.

Theorem 3. Let the assumptions in Lemma 2 hold. Assume that there exist positive constants $c_1$, $c_2$, $\lambda_1$ and constants $\rho_k \ge 1$ satisfying (H1)-(H3), where $\prod_{k=1}^{\infty}\rho_k < \infty$. Then the trivial solution of system (1) is $p$th moment exponentially stable.
Remark 4. In Theorem 3, the positive constants $\rho_k$ are introduced in (H3), where both $\rho_k > 1$ and $\rho_k \le 1$ are allowed. As mentioned in [13], introducing these constants in (H3) makes it possible to tolerate certain perturbations in the overall impulsive stabilization process; that is, Theorem 3 does not strictly require that each impulse contribute to stabilizing the system; some destabilizing impulses may exist. Moreover, when $\rho_{3k-2} = 1/2$, $\rho_{3k-1} = 1/2$ and $\rho_{3k} = 4$ for $k \in \mathbb{N}$, we have $\prod_{k=1}^{\infty}\rho_k < 5$ and $\sum_{k=1}^{\infty}(\rho_k - 1) = +\infty$. Then Theorem 3 can be used, but the results in [20,21] are not applicable to this case.
In the following theorem, we will show that if the continuous dynamics is stable, then, under some condition, the system is still stable with the delayed impulsive effects.
Theorem 5. Assume that the assumptions in Lemma 2 hold.
Suppose that there exist positive constants $c_1$, $c_2$, $\lambda_2$ and constants $\rho_k \ge 1$ such that the corresponding conditions hold. Then the trivial solution of system (1) is $p$th moment exponentially stable.
Remark 6. When the continuous system in (1) is stable, system (1) can always remain stable under stabilizing impulses. Thus, $c_1 + c_2 < 1$ is permissible in Theorem 5, and only one constraint on the constant $\rho$ is assumed. However, $c_1 + c_2 \ge 1$ and an additional condition are necessary in Theorem 3.2 of [20]. Thus, in this aspect, Theorem 5 is more general than the existing results in [20].
The following theorem shows that the trivial solution of system (1) is almost surely exponentially stable under some additional conditions. Theorem 7. Suppose that $p \ge 1$ and the conditions in Theorem 3 or Theorem 5 hold. Then the trivial solution of system (1) is almost surely exponentially stable.
Proof. Using Theorem 3 or Theorem 5, we derive that the trivial solution of system (1) is pth moment exponentially stable. Therefore, there exists a positive constant M_1 such that … . It is obvious that … . Combining the Hölder inequality with (A1) and (56) implies that … . By virtue of the Burkholder–Davis–Gundy inequality, (A1), and (56), we have … , where C(p) is a positive constant depending only on p. Thanks to (A2) and (56), we see that … . Substituting (58)–(60) into (57) gives … , where M_2 is a positive constant. Then, for all ε ∈ (0, λ) and k ∈ N, we have … . Using the Borel–Cantelli lemma (a schematic version of this step is sketched below), we see that there exists a k_0(ω) such that, for almost all ω ∈ Ω and all k ≥ k_0(ω), … , where kδ ≤ t ≤ (k + 1)δ. It follows that … . Consequently, … . Letting ε → 0, the result follows.
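The displayed estimates of this proof were lost in extraction. For orientation, here is a hedged sketch of the standard Borel–Cantelli step, with M_2, λ, ε, and δ as generic placeholders rather than the paper's exact constants:

```latex
\mathbb{P}\!\left(\sup_{k\delta \,\le\, t \,\le\, (k+1)\delta} \lvert x(t)\rvert^{p} \;>\; e^{-\varepsilon k \delta}\right)
\;\le\; M_2\, e^{-(\lambda - \varepsilon) k \delta},
\qquad
\sum_{k=1}^{\infty} M_2\, e^{-(\lambda - \varepsilon) k \delta} \;<\; \infty .
```

By the Borel–Cantelli lemma, for almost every ω there is a k_0(ω) with sup_{kδ ≤ t ≤ (k+1)δ} |x(t)|^p ≤ e^{−εkδ} for all k ≥ k_0(ω), which yields lim sup_{t→∞} (1/t) log |x(t)| < 0 almost surely, i.e., almost sure exponential stability.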
Numerical Examples
In this section, two numerical examples are given to show the effectiveness of the main results derived in the preceding section.
Example 8. Consider a stochastic delay differential system with delayed impulses of the form (66); a generic simulation sketch for systems of this type is given below. It can be seen from Figures 1 and 2 that the unstable continuous dynamics of system (66) can be successfully stabilized by delayed impulses.
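System (66) itself is not reproduced in the text, so the following is only an illustrative sketch of how such an example can be simulated: an Euler–Maruyama scheme for a scalar stochastic delay equation with an unstable drift, combined with periodic delayed impulses. All coefficients (a, b, c, tau, d, imp_period) are hypothetical placeholders, not the values of system (66).

```python
import numpy as np

# Illustrative only: Euler-Maruyama simulation of a scalar stochastic
# delay system with delayed impulses,
#   dx(t) = a*x(t) dt + b*x(t - tau) dW(t),   t != t_k,
#   x(t_k) = c * x((t_k - d)^-),              (delayed impulse)
# with hypothetical coefficients standing in for system (66).
rng = np.random.default_rng(0)

a, b = 0.5, 0.3            # unstable drift, delayed diffusion (assumed)
tau, d = 0.2, 0.05         # state delay and impulse input delay (assumed)
c = 0.6                    # impulse gain; |c| < 1 is stabilizing here
dt, T = 1e-3, 10.0
imp_period = 0.25          # impulse instants t_k = k * imp_period (assumed)

n = int(T / dt)
lag, dlag = int(tau / dt), int(d / dt)
imp_every = int(imp_period / dt)
x = np.ones(n + 1)         # constant initial history x(t) = 1 on [-tau, 0]

for i in range(n):
    # Euler-Maruyama step between impulse instants
    x_tau = x[max(i - lag, 0)]
    x[i + 1] = x[i] + a * x[i] * dt + b * x_tau * rng.normal(0.0, np.sqrt(dt))
    # delayed impulse: reset using the state d time units before t_k
    if (i + 1) % imp_every == 0 and i + 1 >= dlag:
        x[i + 1] = c * x[i + 1 - dlag]

print("final |x(T)| =", abs(x[-1]))  # decays toward 0 for these parameters
```

With these placeholder values the impulses contract the state by a factor of 0.6 every 0.25 time units, which outpaces the exponential growth of the unstable drift, mirroring the qualitative message of Figures 1 and 2.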
It can be seen from Figures 3 and 4 that the stability of system (68) is robust to the delayed impulses.
Conclusion
The pth moment and almost sure exponential stability of stochastic delay differential systems with delayed impulses are investigated in this paper. Using Razumikhin methods, several sufficient stability conditions are established. Finally, two numerical simulation examples are offered to verify the effectiveness of the main results.
Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this paper. | 2,501.8 | 2014-03-31T00:00:00.000 | [
"Mathematics"
] |
Novel Functional Role of NK3R Expression in the Potentiating Effects on Somatolactin α Autoregulation in Grass Carp Pituitary Cells
In our previous study, the NKB/NK3R system was shown to act at the pituitary level to up-regulate SLα synthesis and secretion in grass carp. However, whether NK3R expression can serve as a regulatory target at the pituitary level and contribute to NKB interactions with other SLα regulators was still unclear. In the current study, using grass carp pituitary cells as a model, we report the novel finding that co-treatment of SLα/SLβ with carp TAC3 gene products induced a noticeable enhancement in SLα mRNA expression, and these potentiating effects occurred with a parallel rise in NK3R transcript levels after SLα/SLβ treatment. Interestingly, the stimulatory effects of SLα/SLβ on NK3R gene expression could be further potentiated by co-treatment with IGF-I/-II, and simultaneous exposure of carp pituitary cells to SLα/SLβ and IGF-I/-II in the presence of TAC3 gene products was found to markedly elevate SLα mRNA expression (a 20-fold increase). This synergistic stimulation was mediated by cAMP/PKA-, PLC/PKC-, and Ca2+-dependent cascades functionally coupled with NK3R activation. These findings suggest that local release of SLα, via functional interactions with IGF-I/-II and the TAC3/NK3R system, may constitute a potent stimulatory signal for SLα gene expression in the carp pituitary via up-regulation of NK3R expression.
with cAMP/PKA- 23, PLC/IP3/PKC- 24, and Ca2+-dependent 25 signaling pathways. In recent years, the functional role of NKB in puberty onset 26 and human fertility has aroused considerable interest in the field of reproductive biology, mainly owing to the findings that loss-of-function mutations in NKB or its receptor NK3R can lead to hypogonadotropic hypogonadism and even infertility in human subjects [27][28][29]. Based on studies in mammals (e.g., rodents and sheep), NKB was found to be a key regulator of GnRH pulsatility and downstream LH release via NK3R activation in kisspeptin neurons located in the arcuate nuclei of the hypothalamus 30,31. Similar investigations have recently been extended to fish models, with the novel findings that the TAC3 gene in fish species, e.g., zebrafish [32][33][34], goldfish 35, tilapia 36, and grass carp 14, encodes not only NKB but also the mature peptide of a new member of the tachykinin family called NKB-related peptide (NKBRP). Similar to NKB, NKBRP is also effective in stimulating LH release, e.g., in zebrafish 32, goldfish 35, and tilapia 36, suggesting that the reproductive function of TAC3 gene products has been well conserved throughout vertebrate evolution. Based on our recent in vitro studies in grass carp pituitary cells, interestingly, NKB and NKBRP were found to have no effect on LH secretion or LHβ and GtHα gene expression at the pituitary level, but rather to serve as novel stimulators of prolactin (PRL) and SLα secretion and gene expression via differential activation of NK2R and NK3R expressed in the carp pituitary 14. The investigation of SLα regulation by TAC3 gene products in the carp model has become more exciting with the recent demonstration that IGF-I/-II can act in a synergistic manner with TAC3 gene products, namely NKB and NKBRP, to up-regulate SLα gene expression at the pituitary level, and this potentiating effect is paralleled by the concurrent rise in NK3R expression induced by IGF-I/-II treatment. These new findings demonstrated for the first time that NK3R expression at the pituitary level can serve as a regulatory target for the modulation of pituitary hormone gene expression in vertebrate species.
In this study, using primary cultures of grass carp pituitary cells as a model, the functional interactions of somatolactin autoregulation with IGF-I/-II and TAC3 gene products on SLα gene expression were examined in the carp pituitary, with a focus on the role of NK3R expression as a regulatory target at the pituitary level. As a first step, co-treatments of SLα/SLβ with either IGF-I/-II alone, TAC3 gene products alone (including NKBa and NKBRPa, the gene products of the carp TAC3a gene), or a combination of both were performed to examine their effects on SLα mRNA expression in carp pituitary cells. The potentiating effects observed with SL/IGF and SL/TAC3 gene product co-treatments on SLα gene expression were correlated with parallel changes in NK3R mRNA expression induced by SLα/SLβ treatment alone or in combination with IGF-I/-II co-treatment. Using a pharmacological approach, the signal transduction mechanisms involved in SLα- and SLβ-induced NK3R mRNA expression were elucidated, and the functional role of NK3R expression and of the post-receptor signaling pathways coupled with NK3R activation in the potentiating effects on SLα mRNA expression observed with co-treatment of IGF-I/-II and TAC3 gene products was also confirmed at the pituitary cell level. Our studies provide the first evidence that local release of SLα and SLβ can interact with IGF-I/-II and TAC3 gene products to up-regulate SLα gene expression in the carp pituitary via stimulation of NK3R expression at the pituitary level.
Results
Synergistic effects of somatolactin and TAC3a gene products on SLα mRNA expression. Given that (i) the two somatolactin isoforms, SLα and SLβ, have previously been shown to trigger SLα secretion and gene expression at the pituitary level 7, and (ii) TAC3a gene products can also stimulate SLα secretion and gene expression via activation of NK3R in carp pituitary cells 14, we examined the functional interaction between TAC3a gene products and SLs in their stimulatory activity on SLα gene expression. As shown in Fig. 1a, static incubation with SLα (30 nM) or NKBRPa (1 μM) alone was effective in elevating SLα mRNA expression in carp pituitary cells in a time-dependent fashion. Interestingly, the stimulatory effect on SLα mRNA expression was markedly enhanced (up to 8-fold basal), especially after 24–48 hr of drug treatment, by simultaneous exposure to both SLα (30 nM) and NKBRPa (1 μM). In the single-dose experiment with drug treatment fixed at 24 hr, the potentiating effect (up to 8-fold basal) could still be observed with co-treatment of either SLα or SLβ (30 nM) with the carp TAC3a gene products NKBa (1 μM) and NKBRPa (1 μM), respectively (Fig. 1b). In parallel experiments, the synergistic action between NKBRPa and SLα was also confirmed by a concentration-response study. As shown in Fig. 1c, NKBRPa (1 μM)-induced SLα mRNA expression was enhanced in a dose-dependent manner by co-treatment with increasing levels of SLα (0.01–100 nM). Similar dose dependence of the potentiating effect was also noted in the reciprocal experiment with co-treatment of SLα (30 nM) with increasing levels of NKBRPa (0.1–1000 nM). In the case of SLβ regulation, IGF-I, SLα, and SLβ alone could all trigger SLβ mRNA expression in grass carp pituitary cells, but static incubation with NKBa or NKBRPa did not significantly elevate SLβ mRNA expression or secretion. In addition, co-treatment with NKBa/NKBRPa and SLα did not potentiate SLβ transcript levels (Fig. S1). To establish the functional link between SLα potentiation and NK3R at the pituitary level, the NK3R agonist senktide (1 μM) was substituted for NKBa and NKBRPa in the potentiating study with SLα co-treatment. As shown in Fig. 1d, NK3R activation with senktide mimicked the synergistic effects of TAC3a gene products on SLα mRNA expression when given together with either SLα (30 nM) or SLβ (30 nM). Besides, co-treatment with the NK3R antagonist SB222200 (10 μM) not only reduced the stimulatory actions on SLα mRNA expression induced by SLα (30 nM)/SLβ (30 nM) and NKBRPa (1 μM) alone, but also significantly suppressed the potentiating effect induced by SLα/SLβ and NKBRPa co-treatment (Fig. 1e). These results indicate that the synergistic effect of SL and TAC3 gene products on SLα mRNA expression is dependent on NK3R activation. (Figure legend: after drug treatment, total RNA was isolated for real-time PCR of SLα mRNA; data are presented as mean ± SEM, and groups denoted by different letters differ significantly at p < 0.05, ANOVA followed by Dunnett's test.)
Up-regulation of NK3R gene expression by SLα and SLβ.
Treatment with SLα or SLβ elevated NK3R mRNA expression in carp pituitary cells in a time-dependent fashion (Fig. 2a). In parallel dose-dependent studies, a 24-hr incubation with increasing levels of SLα or SLβ (0.1–100 nM) also elevated NK3R mRNA expression in a dose-dependent manner (Fig. 2b). To further elucidate the signal transduction mechanisms for NK3R regulation by SLs, various pharmacological blockers targeting different pathways were recruited. As shown in Fig. 2c, SLα- or SLβ-induced NK3R transcript expression could be abolished by simultaneous incubation with the PI3K inhibitor wortmannin (1 μM), the Akt inhibitor HIMOC (10 μM), or the mTOR inhibitor rapamycin (20 nM). Similar results were also observed by preventing PI3K activation. (Figure legend: after drug treatment, total RNA was isolated for real-time PCR of NK3R mRNA; data are presented as mean ± SEM, and groups denoted by different letters differ significantly at p < 0.05, ANOVA followed by Dunnett's test.)
Synergistic effects of IGF and SL on NK3R and SLα gene expression. Given our recent studies
showing that IGFs and SLs are both effective in stimulating SLα and NK3R gene expression in carp pituitary cells 7,17, the functional interaction between SLs and IGFs in the regulation of SLα and NK3R gene expression was examined in carp pituitary cells. As shown in Fig. 3a, SLα (30 nM) and IGF-I (50 nM) treatment alone could both significantly elevate NK3R and SLα mRNA expression in carp pituitary cells in a time-dependent manner. Interestingly, the stimulatory effects on NK3R and SLα mRNA expression were significantly enhanced (up to 8-fold for NK3R and 10-fold for SLα), especially after 24–48 hr of drug treatment, with co-treatment of SLα (30 nM) and IGF-I (50 nM). Following the time-course experiment, a single-dose experiment was performed at 24 hr. As shown in Fig. 3b, the potentiating effect could still be observed with co-treatment of either SLα (30 nM) or SLβ (30 nM) with IGF-I (50 nM) or IGF-II (50 nM), respectively. Besides, the functional interaction between SLα and IGF-I was further confirmed by reciprocal dose-response experiments. In this case, IGF-I-induced NK3R and SLα mRNA expression was enhanced in a dose-dependent manner by simultaneous treatment with increasing concentrations of SLα (0.01–100 nM; Fig. 3c). The maximal responses occurred at 100 nM SLα (9-fold basal for NK3R and 11-fold basal for SLα). Similar concentration dependence of the potentiating effect was also observed in the reciprocal experiment with co-treatment of SLα (30 nM) with increasing levels of IGF-I (0.01–100 nM; Fig. 3d).
Synergistic effects of SLα, IGF-I and NKBRPa on SLα mRNA expression.
In the current study, synergism between SL and IGF in the stimulation of NK3R mRNA expression was noted in carp pituitary cells. This raised the question of what happens when carp pituitary cells are challenged with a cocktail containing SL, IGF, and NKB together. To answer this question, a cocktail of SLα, IGF-I, and NKBRPa was tested in grass carp pituitary cells. As shown in Fig. 4a, IGF-I (50 nM), NKBRPa (1 μM), and SLα (30 nM) treatment alone could each stimulate SLα mRNA expression up to 24 hr; however, co-treatment with the three drugs together significantly elevated SLα mRNA from 6 hr (up to 2-fold basal) to 48 hr (up to 20-fold basal). In the single-dose experiment with drug treatment fixed at 24 hr, the potentiating effect could still be observed with co-treatment of either SLα (30 nM) or SLβ (30 nM) with NKBRPa (1 μM) and IGF-I (50 nM) (Fig. 4b). To clarify the mechanism responsible for the regulation of SLα mRNA expression by SLα/β co-treated with IGF-I and NKBRPa, a pharmacological approach was taken in carp pituitary cells. As a first step, the NK3R agonist senktide (1 μM) was recruited to replace NKBRPa in the potentiating study with IGF-I (50 nM) and SLα (30 nM). As shown in Fig. 4c, senktide could mimic the synergistic effects of TAC3 gene products on SLα regulation when incubated simultaneously with either SLα/SLβ (30 nM) or SLα (30 nM) + IGF-I (50 nM)/SLβ (30 nM) + IGF-I (50 nM). Besides, co-treatment with the NK3R antagonist SB222200 (10 μM) not only reduced the stimulatory actions on SLα mRNA expression induced by IGF-I (50 nM), NKBRPa (1 μM), and SLα/SLβ (30 nM) alone, but also markedly suppressed the potentiating effect induced by SLα/SLβ + NKBRPa, SLα/SLβ + IGF-I, and SLα/SLβ + IGF-I + NKBRPa (Fig. 4d). Taken together, these findings suggest that the synergistic effect of SLα/β, IGF-I, and NKBRPa on the stimulation of SLα mRNA expression is mediated through activation of NK3R, a G protein-coupled receptor coupled to the AC/cAMP/PKA, PLC/IP3/PKC, and Ca2+/CaM/CaMK-II cascades.
To clarify the signal transduction underlying the synergistic regulation of SLα mRNA expression, various pharmacological inhibitors/blockers targeting different pathways downstream of NK3R were used. As a first step, the possible involvement of the cAMP-dependent pathway was examined at the pituitary cell level. As shown in Fig. 5a, the AC inhibitor MDL12330A (10 μM) and the PKA inhibitor H89 (10 μM) could block the synergistic effects of SLα, IGF-I, and NKBRPa on the induction of SLα mRNA expression. To shed light on the role of the PLC-dependent cascade in the synergistic actions, SLα regulation by SLα, IGF-I, and NKBRPa was tested with inhibitors of individual components of this pathway. In this case, the synergistic effects of SLα, IGF-I, and NKBRPa on SLα gene expression were suppressed/abolished by simultaneous incubation with the PLC inhibitor U73122 (10 μM) or the PKC inhibitor GF109203X (10 μM), respectively (Fig. 5b). To examine the possible role of the Ca2+-dependent cascade in SLα regulation by the cocktail containing SLα, IGF-I, and NKBRPa, the synergistic effects were also tested with various inhibitors of the Ca2+ pathway. In this case, the cocktail-induced SLα mRNA expression was attenuated/abolished by incubation in Ca2+-free medium or by co-treatment with the voltage-sensitive calcium channel (VSCC) inhibitor nifedipine (10 μM), respectively (Fig. 5c).
Discussion
At present, except for a single study in carp pituitary cells showing that SLα and SLβ play a stimulatory role in the autocrine/paracrine regulation of SLα secretion and synthesis in grass carp 7, little is known about the functional role of SLs at the pituitary level. Since (i) carp SLα and SLβ were found to be effective in triggering SLα secretion, protein production, and gene expression in carp SLα cells 7, and (ii) carp NK3R is specifically expressed in SLα cells within the NIL lobe of the carp pituitary 14, we speculated that SLs may play a role in NK3R regulation at the pituitary level, which may have a functional impact on SLα expression in carp pituitary cells. In the present study, using grass carp pituitary cells as a model, we demonstrated for the first time that SLα and SLβ can up-regulate NK3R gene expression in a time- and dose-dependent manner via direct actions at the pituitary level. In fish models, the SL receptor has been identified as a member of the type I GHR family 37, and its activation can lead to stimulation of the JAK2/STAT5, MAPK, and PI3K/Akt cascades. To test the possible involvement of these signaling cascades in SL-induced NK3R expression, a pharmacological approach using inhibitors of the respective pathways was adopted. In carp pituitary cells, the stimulatory effects on NK3R mRNA expression induced by SLα or SLβ treatment were either totally abolished or partially suppressed by these inhibitors.
In our recent in vitro studies, we demonstrated that (i) SLα and SLβ could both elevate SLα mRNA expression in carp pituitary cells 7, and (ii) carp TAC3a gene products, namely NKBa and NKBRPa, could up-regulate SLα gene expression via activation of NK3R expressed in the carp pituitary 14. In our initial attempt to investigate the functional interactions between SLs and TAC3 gene products on SLα expression, we obtained the novel finding that co-treatment of SLα/SLβ with either NKBa or NKBRPa, respectively, could trigger a synergistic effect on SLα mRNA expression in a time- and dose-dependent manner. This potentiating effect could be mimicked by replacing the TAC3 gene products with the NK3R agonist senktide and blocked by simultaneous incubation with the NK3R antagonist SB222200, suggesting that the synergistic effect depends on NK3R expression at the pituitary level. Together with our current finding of SLα/SLβ up-regulation of NK3R gene expression in carp pituitary cells, this raises the possibility that SL treatment may enhance the stimulatory effect of TAC3 gene products on SLα gene expression by increasing NK3R expression in the carp pituitary. In previous studies, interactions of glucagon with GH have been reported in different species, e.g., in the rat 38 and grass carp 39. In rat hepatocytes, co-treatment with glucagon and GH is known to have a potentiating effect on the stimulation of IGF-I mRNA expression 38. In grass carp, our previous studies have also shown that glucagon could potentiate GH-induced IGF-I gene expression via up-regulation of GHR expression in carp hepatocytes 39. Apparently, the bioactivity of various members of the GH family in fish species can be potentiated through functional interactions with GPCR activation, with increased receptor expression mediating the respective stimulatory influence at the cellular level.
In cancer cell models, e.g., pancreatic cancer cells, functional crosstalk of post-receptor signaling between the IGF-I/insulin receptor and GPCRs (e.g., the neurotensin receptor and the type I angiotensin receptor) has been reported 40. In wound repair of the rabbit cornea, co-treatment of IGF-I with SP is known to have a potentiating effect on the migration of corneal epithelial cells 41,42. In our current study with carp pituitary cells, SLα and SLβ not only potentiated SLα mRNA expression induced by TAC3 gene products, but also notably enhanced the stimulatory effect of IGF-I/-II co-treatment on SLα mRNA expression. Interestingly, simultaneous treatment of carp pituitary cells with SLα/SLβ, IGF-I, and NKBRPa markedly increased basal levels of SLα mRNA, up to 20-fold basal, which was much higher than the corresponding SLα responses induced by SLα/SLβ co-treatment with either IGF-I/-II (up to 8-fold basal) or the TAC3 gene products NKBa and NKBRPa (up to 8-fold basal), respectively. This notable increase in the potentiating effect caused by simultaneous treatment with SLα, IGF-I, and NKBRPa also occurred alongside a novel finding in carp pituitary cells, namely that co-treatment with SLα/SLβ could potentiate the stimulatory effect of IGFs on NK3R gene expression at the pituitary level. Similar to our results with SL co-treatment with TAC3 gene products, the highly potent synergistic effect on SLα gene expression induced by simultaneous treatment with the three stimulators could be mimicked by substituting the NK3R agonist senktide for NKBRPa and blocked by co-treatment with the NK3R antagonist SB222200. These findings strongly suggest that the potentiating effect caused by the three SL stimulators is highly dependent on NK3R expression in the carp pituitary. NK3R is a member of the rhodopsin-type class I G-protein-coupled receptors (GPCRs), and in mammals its activation can trigger intracellular signaling via Go and Gq/11 25,43, followed by cAMP production 44,45, PLC-dependent PI hydrolysis 23,24, mobilization of IP3-sensitive intracellular Ca2+ ([Ca2+]i) 24, and extracellular Ca2+ ([Ca2+]e) entry via voltage-dependent Ca2+ channels 25,46. In carp pituitary cells, our recent studies have also demonstrated that NKBa/NKBRPa could stimulate SLα gene expression through NK3R activation via the AC/cAMP/PKA, PLC/IP3/PKC, and Ca2+/CaM/CaMK-II pathways 14. Consistent with these previous findings, blocking the respective post-receptor signaling pathways using the AC inhibitor MDL12330A, the PKA inactivator H89, the PLC blocker U73122, the PKC inhibitor GF109203X, removal of [Ca2+]e with a Ca2+-free culture medium, or inactivation of voltage-sensitive Ca2+ channels with nifedipine were all effective in inhibiting/blocking the highly potent synergistic effect on SLα gene expression caused by simultaneous stimulation with SLα, IGF-I, and NKBRPa. These results, as a whole, provide evidence that SLα and SLβ can synergize with IGF-I/-II to potentiate SLα gene expression induced by TAC3 gene products through up-regulation of NK3R expression in the carp pituitary.
In summary, we have demonstrated for the first time that SLα and SLβ can act at the pituitary cell level to potentiate the stimulatory effects of TAC3 gene products and IGF-I/-II on SLα gene expression via up-regulation of pituitary NK3R expression. The stimulation of NK3R gene expression was probably mediated through the JAK2/STAT5, MAPK, and PI3K/Akt cascades. In this study, we also obtained the novel finding that simultaneous treatment with SL, IGF, and TAC3 gene products together can serve as a highly potent stimulatory signal for SLα gene expression; this stimulatory effect was dependent on NK3R expression in the carp pituitary and involved activation of the post-receptor signaling cascades, namely the AC/cAMP/PKA, PLC/PKC, and Ca2+-dependent pathways, coupled to NK3R stimulation. Since SL autoregulation via local release of SLα and SLβ has recently been demonstrated in the carp pituitary 7, our findings suggest that SLα and SLβ released at the pituitary level may act in an autocrine/paracrine manner to modulate the pituitary sensitivity to the synergistic stimulation of SLα expression triggered by IGF-I/-II and TAC3 gene products via up-regulation of NK3R expression in the carp pituitary.
Animals.
One-year-old (1+) grass carp (Ctenopharyngodon idellus) with body weights ranging from 2.0 to 3.0 kg were bought from local markets and maintained in well-aerated 250-L aquaria at 20 ± 2 °C under a 12L:12D photoperiod. Given that carp at this stage are pre-pubertal and sexual dimorphism is not apparent, fish of mixed sexes were used for the preparation of pituitary cell cultures. All animal experiments were conducted in accordance with the guidelines and approval of the respective Animal Research and Ethics Committees of the University of Hong Kong and Huazhong Agricultural University.
Reagents. Recombinant grass carp SLα and SLβ proteins were expressed in E. coli, purified, and functionally characterized as described previously 7. The two hormones were dissolved in PBS and stored frozen at −80 °C as 0.1 mM stocks in small aliquots. Grass carp NKBa and NKBRPa were synthesized by GenScript using the automated solid-phase method, and the carboxyl terminus of each peptide was amidated. These peptides were dissolved in DMSO and stored frozen at −80 °C as 1 mM stocks in small aliquots. Human IGF-I and IGF-II were purchased from Sigma, dissolved in double-distilled deionized water, and stored as 0.1 mM stocks in small aliquots at −80 °C. Pharmacological agents, including MDL12330A, H89, GF109203X, U73122, nifedipine, Ly294002, wortmannin, rapamycin, HIMOC, U0126, PD168916, HEX, and IQDMA, were acquired from Calbiochem, while senktide and SB222200 were purchased from Tocris. Similar to the peptides, these pharmacological agents were prepared as high-concentration frozen stocks in small aliquots and diluted with pre-warmed culture medium to the appropriate concentrations 15 min prior to drug treatment.
Primary culture of grass carp pituitary cells. Grass carp pituitary cells were prepared by the trypsin/DNase digestion method as described previously 47. Briefly, pituitaries were excised from grass carp and diced into 0.5-mm fragments using a McIlwain tissue chopper. After 30 min of trypsin digestion with constant shaking at 28 °C, pituitary fragments were suspended in Ca2+-free MEM supplemented with DNase I (0.01 mg/ml, Sigma). Pituitary cells were dispersed by gentle trituration and filtered through a sterile nylon mesh (pore size: 20 μm) to remove undigested fragments/debris. After that, the cells were harvested by centrifugation at 1000 rpm for 10 min and re-suspended in MEM. Total cell yield and percentage viability were estimated by cell counting in the presence of trypan blue using a hemocytometer.
Measurement of carp SLα and NK3R mRNA expression. Grass carp pituitary cells were seeded
in poly-D-lysine-coated 24-well culture plates at a density of ~2.5 × 10^6 cells/ml/well. On the following day, drug treatment was initiated by replacing the old medium with MEM containing appropriate levels of the test substances. After drug treatment, total RNA was extracted from individual wells using Trizol and reverse transcribed with Superscript II (50 units, Invitrogen). The RT samples obtained were subjected to qPCR using a LightCycler SYBR Green I Kit (Roche) with primers specific for grass carp SLα (forward primer: 5′-ACCCACTGTACTTCAATCTCC-3′; reverse primer: 5′-CGTCGTAACGATCAAGAGTAG-3′) and NK3R (forward primer: 5′-GCCAAGAGAAAGGTTGTGAAGA-3′; reverse primer: 5′-GTGTACATGCTGCTCTGGCG-3′), respectively. PCR cycling parameters for SLα and NK3R mRNA detection were set at 94 °C for 3 min followed by 35 cycles of amplification with denaturation at 94 °C for 30 sec, annealing at 52 °C for SLα mRNA or 56 °C for NK3R mRNA for 30 sec, and extension at 72 °C for 30 sec. Signal detection was routinely set for 20 sec at 84 °C for SLα and 86 °C for NK3R, respectively. In these studies, serial dilutions of plasmid DNA containing the ORF of SLα (GenBank no. EF372074) and NK3R (GenBank no. JQ254913) cDNA were used as the standards for data calibration (the standard-curve relations underlying this calibration are sketched below). Parallel qPCR measurement of β-actin was also conducted in each experiment to serve as the internal control.
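For reference, absolute quantification against a plasmid standard curve conventionally rests on the following relations; these are standard qPCR practice rather than equations stated in the text (C_t is the threshold cycle, N_0 the starting copy number, and m, b the fitted slope and intercept of the dilution series):

```latex
C_t \;=\; m \,\log_{10} N_0 \;+\; b
\quad\Longrightarrow\quad
N_0 \;=\; 10^{\,(C_t - b)/m},
\qquad
E \;=\; 10^{-1/m} - 1 ,
```

where the amplification efficiency E approaches 1 (i.e., m ≈ −3.32) for an ideal assay, and a dynamic range of ≥10^5 corresponds to standards spanning at least five serial 10-fold dilutions.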
Data transformation and statistical analysis. For real-time PCR measurement of NK3R and SLα mRNA, standard curves with a dynamic range of ≥10^5 and a correlation coefficient of ≥0.95 were used for data calibration with RotorGene-Q software 1.7 (Qiagen) in unsupervised mode. Since no significant changes were noted in β-actin mRNA levels between the different experimental groups in our studies, SLα and NK3R mRNA data were simply transformed as a percentage of the mean value in the control group without drug treatment ("%Ctrl"); a sketch of this transformation and the statistical testing is given below. The data presented (as mean ± SEM) were pooled results from 6–8 separate experiments and analyzed by ANOVA followed by Dunnett's test using Prism 6.0, and differences between groups were considered significant at P < 0.05.
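A minimal sketch of the described workflow is shown below; the replicate values are fabricated placeholders purely to make the snippet runnable, and the Dunnett test uses SciPy (version ≥ 1.11) in place of Prism.

```python
import numpy as np
from scipy import stats

# Hypothetical replicate measurements (already calibrated copy numbers);
# the paper pools 6-8 separate experiments per group.
ctrl   = np.array([1.00, 0.92, 1.08, 0.97, 1.05, 0.98])
sl_a   = np.array([2.10, 1.85, 2.30, 2.05, 1.95, 2.20])
sl_igf = np.array([8.40, 7.90, 9.10, 8.70, 8.20, 8.90])

# Express each group as a percentage of the mean control value ("%Ctrl")
to_pct = lambda x: 100.0 * x / ctrl.mean()
groups = [to_pct(g) for g in (sl_a, sl_igf)]

# One-way ANOVA across all groups, then Dunnett's test vs. the control
f, p_anova = stats.f_oneway(to_pct(ctrl), *groups)
dunnett = stats.dunnett(*groups, control=to_pct(ctrl))  # SciPy >= 1.11
print(f"ANOVA p = {p_anova:.4g}")
print("Dunnett p-values vs control:", dunnett.pvalue)
```
| 6,137.6 | 2016-10-27T00:00:00.000 | [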
"Biology",
"Environmental Science"
] |
Export Trade Incentive Schemes Nexus Export Performance in Ethiopia
This paper examines the role of export trade incentive schemes in export performance and, thereby, in economic growth in Ethiopia. The data used in the paper were collected from different offices and websites, including the Ethiopian Revenues and Customs Authority, the National Bank of Ethiopia, and the Development Bank of Ethiopia. The collected data are analyzed using tables and correlation analysis. The results of the study reveal that the government uses both fiscal and financial schemes as export incentives. Following the incentives, the total export value and the volume of the major export items have been increasing over time. The correlation analysis shows that export value and fiscal incentives have a positive relationship, whereas the correlation between export value and financial incentives is negative. The government is expected to do more in the provision of both financial and fiscal incentive schemes so as to strengthen the contribution of this sector to the economy.
Introduction
The theory of international trade is one of the oldest branches of economic thought. Exporting is a major component of international trade. Exporting has played an important role in the growth of the economy as a whole, as well as of individual firms, since the mercantilist era. Increasing exports is a key concern for developing countries: it boosts profitability; enhances production efficiency and competitiveness; develops resource-utilization capacity; provides employment; and improves countries' trade balances (Soprana, 2011; Hailu, 2012).
Generally, there are two broad types of exports: merchandise/goods exports and service exports. The export of a good happens when there is a change of ownership from a resident to a non-resident; this does not necessarily imply that the good physically crosses the frontier. However, in specific cases national accounts impute changes of ownership even though in legal terms no change of ownership takes place (WTO, 2010).
Like other nations, Ethiopia has experience in both service and goods exports, but this paper considers only goods exports. Currently, there are about 35 goods export items in Ethiopia. Based on their export value, the principal commodity exports are coffee, gold, oilseeds, and pulses (ECRA, 2012).
A common objective in most countries today is to find ways to increase exports. This can be achieved by encouraging exporting firms to export more and by inducing non-exporters to begin exporting (Zafar, 2006). These objectives often run up against different barriers to penetrating foreign markets. According to Hibbert (1990), export barriers include tariff barriers, which lead to increases in the cost of production; informational barriers about international markets; and inadequate knowledge of international market conditions. In this regard, international experience suggests that the problems of penetrating international markets due to these barriers can be minimized through effective government support in the form of export promotion. Export trade incentives are the most common export promotion instrument, composed of financial, fiscal, and other non-monetary incentives (Pfaffermayr, 1996). Yet, in providing such incentives to exporters, procedures and trends need to be carefully designed, implemented, and assessed to minimize the considerable revenue forgone associated with such provisions (Biggs, 2007). This paper examines the trend of goods and service exports as well as the export trade incentive schemes and the role of such incentives in goods export growth in Ethiopia.
Despite the introduction of export trade incentive schemes in Ethiopia, the export performance of the country is still low: the export share of GDP is very low compared with other Sub-Saharan African countries (ECRA, 2011; WTO, 2010). The effectiveness of export trade incentives for the growth of the country's goods and service exports is still an open question, including the intriguing question of whether the export incentive schemes have robustly helped to improve the export of goods and services in the country.
The other aim of the study is to examine the effect of export trade incentives on the growth of goods exports in Ethiopia. Here, too, very few empirical studies have been conducted. Therefore, this study is designed to fill such gaps in the country and in this area of study. Hence, to address these issues, the following general guiding questions are formulated: What is the trend of goods export growth in Ethiopia? What export trade incentive schemes are provided to enhance goods exports in Ethiopia?
What is the trend of revenue forgone as a cost of export trade enhancement in Ethiopia? Do export trade incentive schemes have an effect on the growth of service and goods exports in Ethiopia?
Objectives of the study
The objective of the study is to investigate the role of export trade incentive schemes in export growth in Ethiopia. Specifically, the objectives are the following.
To examine the trend of export growth in Ethiopia; to identify the export trade incentive schemes provided to enhance export growth in Ethiopia; to assess the trend of revenue forgone as a cost of export trade enhancement in Ethiopia; and to examine the effect of export trade incentive schemes on export growth in Ethiopia.
Literature Review
Export trade incentives are government fiscal and financial policies designed to encourage domestic firms to produce and export more to foreign markets. Export incentives are all arrangements made by the government to help exporters overcome the constraints that they may encounter in their export transactions, together with everything that can contribute to a dynamic system for promoting foreign sales (Hibbert, 1990).
Having recognized the importance of export trade incentives and the existence of barriers facing exporting firms, various incentive policies have been designed to help exporters overcome these barriers and become competitive (World Bank, ITD 2009). Export trade incentives boost the export performance of the country and in that way help to balance the trade deficit, which is one of the main problems of economic development.
De Wulf and Sokol (2005) explained that the main objective of export trade incentives is to remove the tariff burden on inputs and to increase access to industrial inputs. It is true that different countries may differ in their priorities. In most developed nations, the primary objective of providing export trade incentives is to overcome the balance-of-trade deficit by promoting export trade, whereas in developing countries the primary objective is to overcome the shortage of foreign currency. Similarly, in Ethiopia the primary aim of providing export trade incentives is to generate foreign currency earnings, to ensure economic development by accelerating the industrial growth of the country, and to improve the foreign exchange earnings needed for development and investment.
Export trade incentive and economic growth
Exporting is an important factor in economic growth, and therefore export promotion is a critical consideration for the economic development of any country. Many strongly support an export promotion strategy based on incentives that benefit both developed and developing countries. Among the usual advantages of export promotion are foreign exchange earnings; exposure of domestic firms to international markets and a consequent increase in their efficiency; transfer of technology; greater utilization of the resources in which the economy has a comparative advantage; exploitation of economies of scale; employment creation and increased labor productivity; relaxation of current-account pressures for foreign capital goods by increasing the country's external earnings and attracting foreign investment; and an increase in total factor productivity and consequently in the well-being of the country (Emilo, 2002).
Exporters need to be competitive in the world market; to this end, the government should provide support through export trade incentives, which encourage greater competition and are associated with productivity gains, technological know-how, and knowledge. Therefore, incentives are important to promote efficient resource allocation among sectors and to move the whole economy toward a competitive market.
Based on the foregoing discussion, it can thus be concluded that export promotion is a popular policy for achieving rapid economic growth through diversified exports, which enables developing countries to attain stability. Export trade incentives are of different types.
The first type is fiscal incentives, which include all measures taken to reduce the disincentives to export efforts caused by duties or other charges on exports; duties on imports required for the production of exports; duties on imports of materials and components required for the production of manufactured goods; and duties on production that add unnecessary cost to the selling price of export products (Hibbert, 1990). According to Oyejide (2007), fiscal incentive schemes such as duty drawback and exemptions, manufacturing under bonded warehouses, and the establishment of export free zones are considered 'compensatory': they are targeted at eliminating the disincentives arising from the economy's trade, investment, and exchange regimes by assuring an equal footing with foreign competitors in terms of access to inputs at world market prices.
The second type is financial incentives, which are designed to make the export business attractive through compensation for price disadvantages (Hibbert, 1990). This category of incentives includes direct/indirect cash subsidies; export credit facilities for pre-shipment and post-shipment transactions; special foreign exchange allocations; and remission of taxes normally chargeable on profit. Financial incentives enable exporters to overcome their financial constraints (Morrissey, 2012).
The practice of giving export incentives is nearly universal; however, the extent and form of export incentives vary from country to country (Ahuja, 2001). According to Lennon (2009), manufacturers in Asia have benefited from both incentive methods, which have enhanced their export status. In the same fashion, in South Africa the implementation of export incentives has increased goods exports by a large percentage from year to year.
In Ethiopia too, the government acknowledges the importance of increasing and diversifying the country's exports to ease the foreign currency shortage along a free-market-based economic path. The government has renounced taxes on nearly all varieties of exported products (Geda, 2002).
There are different proclamations in Ethiopia that govern these incentives. Accordingly, the fiscal incentive schemes include the duty drawback scheme, the voucher scheme, the bonded manufacturing warehouse scheme, the bonded input supplies warehouse scheme, and the bonded export factory and industrial zone schemes. The financial trade incentive schemes for exporters are the export credit guarantee scheme, the foreign exchange retention scheme, and external loans and suppliers' or foreign partners' credit. Generally, these are the export trade incentive schemes applied by the government of Ethiopia to improve the country's foreign currency reserves by encouraging exporters, with the expected return of improved economic growth as well as economic development of the country.
Methodology of the study
The study uses trend analysis and correlation analysis for the period 2007/08–2011/12. To conduct this study, five years of secondary quantitative time-series data were used; the data were collected from the Ethiopian Revenues and Customs Authority (ERCA), the National Bank of Ethiopia (NBE), the Development Bank of Ethiopia (DBE), and the Ministry of Industry (MOI).
The export data of the country and the fiscal export incentive data for the specified period were collected from the reports and database of ERCA and NBE; data related to financial incentives were collected from NBE and DBE; data on fiscal incentive beneficiaries and other related data were collected from MOI; and the real exchange rate was obtained from NBE.
In the study, the independent variables are the incentives: fiscal incentives, measured as the aggregate annual revenue forgone under each fiscal incentive scheme, expressed in local currency (Birr), and financial incentives, measured as the annual monetary financial incentives provided to exporters to ease their financial constraints. The dependent variable of the study is export growth, expressed in terms of export value and export volume.
Trend of total export value from 2007/08-2011/12
The trend of total export value (covering items such as coffee, oilseeds, leather and leather products, pulses, meat and meat products, fruits and vegetables, live animals, chat, gold, flowers, and others) shows an increase year after year. During the study period, a total of 10,816.70 million dollars was obtained from goods and service exports. Coffee, the major export item, shows an increasing trend in value during the study period. This is attributed to the array of export incentives that enabled exporters to increase the amount exported over time.
As indicated in Table 4.1, during the study period Ethiopia's total export trade value increased from 1,465.90 million dollars in 2007/08 to 3,152.60 million dollars in 2011/12, with an average annual growth rate of 22.26 percent (a cross-check of this figure against the endpoints is worked below). The lowest total export value growth rate was registered in 2008/09, at −1.23 percent, and the highest in 2009/10, at 38.35 percent. From this it can be concluded that, during the study period, the total export value of Ethiopia followed a positive upward trend.
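As a quick arithmetic cross-check (our computation, not from the paper), the endpoints imply a compound annual growth rate over the four year-on-year steps of

```latex
\left(\frac{3152.60}{1465.90}\right)^{1/4} - 1 \;\approx\; 2.1506^{\,0.25} - 1 \;\approx\; 0.211 \;=\; 21.1\% ,
```

which is close to, but slightly below, the reported 22.26 percent. The difference is expected: the paper's figure is the arithmetic mean of the yearly growth rates, which exceeds the geometric (compound) rate whenever growth fluctuates from year to year.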
Trend of total export volume from 2007/08-2011/12
The volume of exports is another aspect of a country's capacity to generate adequate foreign currency. Growth has been recorded in the total export volume of Ethiopia, even though it is not uniformly distributed across the different export items and fluctuated within the period. An increase in export volume supported by stable prices helps to generate adequate foreign currency. In the period, a total of 4,469.94 million kg of goods was exported to different parts of the world. Generally, it is observed that the volume of exports increased during the period.
Export trade incentive scheme in Ethiopia
The trend of the trade incentive schemes provided to promote the country's exports matters for ensuring economic development and for improving the foreign exchange earnings needed for it.
Fiscal export trade incentive scheme
This incentive is the main export trade incentive scheme provided comprehensively by the government of Ethiopia. The most used fiscal incentive schemes are the duty drawback scheme and the voucher scheme.
Regarding the drawback scheme, according to Proclamation No. 768/2012, the exporters entitled to benefit from this scheme are: producer exporters wholly, partially, or occasionally engaged in exporting their products; indirect producer exporters wholly, partially, or occasionally supplying their products to producer exporters or exporters in the form of raw materials or finished goods; raw material suppliers supplying imported raw materials to producer exporters without processing them; and those re-exporting commodities or raw materials that they have imported upon payment of duties and that are not in conformity with purchase specifications, are damaged, or were short-delivered.
The duty and other taxes paid at the time of importation of raw materials, and the VAT (value added tax) paid on domestic purchases, shall be refunded to the beneficiaries of the scheme upon fulfillment of the conditions laid down in the proclamation. The refund is expected to be made within six months from the date of export. This creates a financial constraint for exporters, since the duty and other indirect taxes are paid at the time of importation and acquisition of raw materials. During the period 2007/08–2011/12, the amount of duties and taxes forgone under the drawback scheme was 165.7 million birr, reflecting the increase in the value and volume of goods exports in the specified period. In Ethiopia this scheme is not administered in a well-coordinated way; there are problems with the data on exporters and the refund amounts. In addition, the time taken for refunds is still not short, even though there has been an improvement in recent years (ERCA, 2011).
The trend of revenue forgone under the drawback scheme
This scheme is a functional scheme provided to persons and organizations that have obtained a legal certificate issued by the Ministry of Industry upon fulfilling the requirements specified in the proclamation; the voucher book is granted by ERCA.
The voucher scheme allows exporters to be exempted from customs duties and taxes before the importation of inputs used to produce export goods, which enables exporters to access export inputs at world market prices and thereby increases export revenue. According to Proclamation 768/2012, exporters who fail to export their products abroad and instead sell them on the local market are liable to pay duties and taxes, including penalties. During the period 2008 to 2012, the Ethiopian government forgave a total of 5,366 million birr through the voucher scheme. The scheme shows an increasing trend, which indicates that exporters are attracted by it.
Financial export trade incentive scheme
The financial export trade incentive scheme is another form of export trade promotion provided by the government of Ethiopia. The rationale for this incentive is to minimize the financial constraints of exporters.
There are two main interrelated financial export trade incentive schemes provided by the government for exporters: the export credit guarantee scheme and the export credit scheme, according to the National Bank of Ethiopia's export credit directive.
The export credit guarantee scheme is one of the financial export trade incentive schemes, financing exporters' pre-shipment and post-shipment activities. This guarantee was initially given by the National Bank of Ethiopia and later developed by the Development Bank of Ethiopia. The bank acts as collateral between exporters, on the one hand, and government and private banks, on the other, enabling exporters to obtain credit for their export activities. This enables national exporters to compete on an equal footing with other exporters in the international market. Table 4.4 shows that the amount of loans provided under the export credit guarantee for exporters from 2008 to 2012 totaled 2,126 million birr. These loans were provided by different government and private banks. Export credit is the other method of financing exports. Credit is provided to exporters as per the National Bank of Ethiopia. From 2007/08 to 2011/12, a total of 19,302.91 million birr in loans was given to goods exporters. As illustrated in Table 4.4, the loans given to exporters were increasing. The correlation between export value and fiscal incentives is computed to be 0.822, which shows a positive relationship during the study period; this result tells us that as fiscal incentives increase, exports also increase. In the other computation, the correlation between export value and financial incentives is −0.893, which shows a negative relationship between the two (a sketch of this computation is given below).
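A minimal sketch of the correlation computation follows. Only the endpoint export values (1,465.90 and 3,152.60 million USD) and the reported correlations (0.822 and −0.893) come from the paper; the intermediate export values and both incentive series are illustrative placeholders.

```python
import numpy as np
from scipy import stats

# Yearly series for 2007/08-2011/12; intermediate values interpolated
# from the reported growth rates, incentive series purely hypothetical.
export_value        = np.array([1465.90, 1447.87, 2003.07, 2747.10, 3152.60])
fiscal_incentive    = np.array([600.0, 750.0, 980.0, 1300.0, 1700.0])
financial_incentive = np.array([5200.0, 4800.0, 4300.0, 3600.0, 3100.0])

# Pearson correlation of export value with each incentive series
r_fiscal, p_fiscal = stats.pearsonr(export_value, fiscal_incentive)
r_fin, p_fin = stats.pearsonr(export_value, financial_incentive)
print(f"export vs fiscal:    r = {r_fiscal:.3f} (p = {p_fiscal:.3f})")
print(f"export vs financial: r = {r_fin:.3f} (p = {p_fin:.3f})")
```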
Conclusions
The trend analysis revealed that the country's earnings from goods exports have grown as a result of the export incentives. Regarding the volume of goods exports, the average total goods export volume of the country over the specified period, under a comprehensive export trade incentive scheme, has shown an increase. The volume and value of exports were increasing while the credit guarantee was decreasing, which shows an inverse relation between the credit guarantee and exports. The correlation analysis showed that there is a positive correlation between goods export value and fiscal incentives, and a negative relationship between export value and financial incentives.
Recommendation
To encourage the export growth of the country, the government has to strengthen the fiscal incentive schemes and should reduce the financial incentive schemes.
According to the study, there is a positive relationship between fiscal export trade incentive schemes and the growth of export value, which has an impact on economic growth. The policy implication is that policy makers should be careful in designing policies, and the respective organizations should properly implement and administer the incentive schemes to promote exports, with a continuous modernization process. Government institutions that implement the export trade incentives should properly deploy automation systems to reduce exporters' costs, to maintain proper records, and to give fast and effective service to exporters. | 4,393.2 | 2020-09-01T00:00:00.000 | [
"Economics"
] |
YscU/FlhB of Yersinia pseudotuberculosis Harbors a C-terminal Type III Secretion Signal*
Background: After auto-proteolysis and dissociation, YscUCC is secreted by the T3SS of Yersinia pseudotuberculosis. Results: YscUCC harbors a specific C-terminal T3S signal, and its deletion triggers an increase in YscF secretion without affecting Yop secretion. Conclusion: The C-terminal end of YscU participates in the regulation of YscF secretion but not in the substrate specificity switch. Significance: This is the first report of a C-terminal T3S signal. All type III secretion systems (T3SS) harbor a member of the YscU/FlhB family of proteins, which is characterized by an auto-proteolytic process that occurs at a conserved cytoplasmic NPTH motif. We have previously demonstrated that YscUCC, the C-terminal peptide generated by auto-proteolysis of Yersinia pseudotuberculosis YscU, is secreted by the T3SS when bacteria are grown in Ca2+-depleted medium at 37 °C. Here, we investigated the secretion of this early T3S substrate and showed that YscUCC encompasses a specific C-terminal T3S signal within the last 15 residues (U15). U15 promoted C-terminal secretion of reporter proteins such as GST and of YopE lacking its native secretion signal. Similar to the "classical" N-terminal secretion signal, U15 interacted with the ATPase YscN. Although U15 is critical for YscUCC secretion, deletion of the C-terminal secretion signal of YscUCC affected neither Yop secretion nor Yop translocation. However, these deletions resulted in increased secretion of YscF, the needle subunit. Thus, these results suggest that YscU, via its C-terminal secretion signal, is involved in the regulation of YscF secretion.
Many pathogenic and symbiotic Gram-negative bacteria make use of the type III secretion system (T3SS) to export and translocate effector proteins into eukaryotic host cells during infection (1). The human pathogenic Yersinia spp. Y. pestis, Y. enterocolitica, and Y. pseudotuberculosis possess a common virulence plasmid that encodes the T3S components (Ysc proteins) as well as the effector and translocator proteins (Yop proteins) (2). Once translocated into the eukaryotic cell, Yop effectors counteract host immune defense mechanisms such as phagocytosis and apoptosis to promote bacterial survival and spreading (3). These processes are highly regulated, and it has been shown that Y. pseudotuberculosis up-regulates Yop expression after contact with eukaryotic cells has been established (4). This target cell contact can be mimicked in vitro by depletion of calcium from the growth medium and a shift of the growing bacteria from 26°C to 37°C, resulting in massive Yop secretion into the culture supernatant (5,6). The role of calcium in Yop regulation has been an invaluable tool for investigating the regulation of yop genes as well as Yop secretion in Yersinia spp.
The hallmark of the T3SS is the needle complex, which is formed by a basal multi-ring substructure spanning the bacterial envelope and a needle-shaped structure emerging from the bacterial surface (7)(8)(9). The base of the needle complex forms a hollow structure through which the T3SS substrates pass in an unfolded state (10). During assembly, the basal body is built first, generating a machinery precursor that is secretion-competent only for the early substrates, i.e., components involved in needle and inner rod formation (e.g., YscF, YscI, and YscP). After activation of the T3SS, the secretion of early substrates is modulated in favor of the secretion of translocator and effector proteins, the so-called middle and late substrates, respectively (11,12). This phenomenon is described as the substrate specificity switch, first identified by Macnab and coworkers in the flagellum (13,14).
Effector proteins have a secretion signal in their N-terminal region that allows their specific recognition by the machinery (15). It was previously suggested that the secretion signal resides in the mRNA rather than the protein, but today the general consensus is that the secretion signal resides within the first 20 residues of the secreted substrates (16,17). Despite extensive work devoted to T3S signal characterization, no consensus sequence has yet been identified. However, sequence comparisons showed that the N-terminal regions of T3S substrates are unstructured, with elevated solvent accessibility (18-20). Also, an enrichment in some residues, such as serine, has been observed in the N-terminal regions of T3S substrates (21). These characteristics give the secretion signal a high tolerance for the introduction of point mutations, which barely affect secretion efficiency (22). These features of the T3S signal were used to develop prediction software to identify T3S substrates (20,23). In Yersinia, the T3S ATPase YscN recognizes the secretion signal of the effector proteins when they are in complex with their cognate chaperones (24). It has been shown that InvC, the YscN homologue in Salmonella enterica, triggers the dissociation of the chaperone/effector complex in an ATP-dependent manner, which subsequently allows effector secretion (25).
Recently, we identified YscUCC as a new early substrate of the Y. pseudotuberculosis T3SS (26). YscU is an inner membrane protein with a large cytoplasmic domain denoted YscUC (27). YscU belongs to the FlhB family of proteins found in all T3SSs. This class of proteins is characterized by an auto-proteolytic process that occurs at a conserved NPTH motif within the cytoplasmic domain (28). The cleavage of YscU between Asn 263 and Pro 264 of the NPTH motif generates a 10-kDa C-terminal peptide, named YscUCC (29-31). After cleavage, YscUCC forms a stable complex with YscUCN, the N-terminal part of the cytoplasmic domain, which is linked to the N-terminal membrane domain of YscU (32). Auto-proteolysis of YscU is essential for proper yop expression and Yop secretion, suggesting that YscU is critical for calcium regulation of the Yersinia T3SS (11,31). In addition, yscU mutants affected in auto-proteolysis interfere with YscF needle formation, a property that is shared with YscP (11,31). Both YscP and YscU (FliK and FlhB, respectively, in the flagellum) have been linked to the so-called substrate specificity switch (33,34).
Previously, we showed by a combination of in vivo and in vitro experiments that YscU auto-proteolysis, followed by dissociation and secretion of YscUCC, is essential for Yop secretion in Y. pseudotuberculosis during growth in Ca2+-depleted medium (26). Indeed, we showed that depletion of calcium induced intra-molecular dissociation of YscUCC from the remaining part of YscU. Importantly, addition of ~1 mM Ca2+ blocked secretion of both YscUCC and Yops. These results prompted us to suggest that YscU is the actual Ca2+ sensor of the Yersinia T3SS (26). Here we have studied the secretion of the early substrate YscUCC and show that YscUCC encompasses a specific C-terminal T3S signal sequence corresponding to the last 15 residues of the protein (U15). The C-terminal U15 can promote secretion of reporter proteins such as glutathione S-transferase, as well as of YopE lacking its native T3S signal sequence. In addition, we show that U15 interacts with the ATPase YscN in an orientation-dependent manner. Moreover, our data highlight an involvement of the YscU C-terminal end in the regulation of the secretion of YscF, the needle subunit.
Experimental Procedures
Bacterial Strains, Plasmids, and Growth Conditions-Bacterial strains and plasmids used in this study are listed in supplemental Table S1. Standard molecular biology methods were used to generate the different plasmid constructs used in this study. The PCR primers used in the different cloning strategies are listed in supplemental Table S2. For the complementation assay of YopE secretion, the divergent yerA-yopE operon was cloned into pBADmycHis A. The expression of yopE and yerA is under the control of their native promoters and co-regulated with the T3S machinery. The different yopE variants were cloned similarly. The sequences of all constructs were systematically verified (Eurofins MWG Operon). The pET-yscN plasmid was generated by GenScript Corporation. Escherichia coli strains were grown in Luria-Bertani broth (LB) or on Luria agar plates at 37°C. Y. pseudotuberculosis strains were grown at 26°C in LB or on Luria agar plates. Antibiotics were added to the medium for selection according to the resistance markers carried by the plasmids. The following concentrations were used: kanamycin, 50 µg/ml, and carbenicillin, 100 µg/ml.
Yop Secretion Analysis-To induce Yop secretion, Y. pseudotuberculosis strains were first grown at 26°C for 2 h in Ca 2+ -depleted LB medium (medium containing 5 mM EGTA and 20 mM MgCl 2 ) and then shifted to 37°C for 3 h. Cultures were started at an A 600 of 0.1. Samples were treated as described previously (26) and separated on Tris-Tricine polyacrylamide gels. Proteins were either stained with Coomassie R-250 or transferred onto a PVDF membrane for immunoblotting. Anti-Yop, anti-YscU, anti-YscP, anti-DnaJ, and anti-YscF antibodies were diluted 1:5,000. Horseradish peroxidase-conjugated anti-rabbit antibody was diluted 1:10,000 (GE Healthcare). Proteins were detected with a chemiluminescence detection kit (GE Healthcare). Quantification by densitometry was performed using Multi Gauge software (Fujifilm). The bands to be quantified were selected and quantitated after background subtraction.
HeLa Cell Cytotoxicity Assay-Yersinia cultures were started at an A 600 of 0.1 in LB medium containing 1 mM Ca 2+ . After 1 h of growth at 26°C, cultures were shifted to 37°C for 2 h. HeLa cells were infected for 45 min at a multiplicity of infection of 10 or 20. Cytotoxicity was assayed as previously described by Rosqvist and coworkers, with some modifications (3). For immunostaining, samples were fixed with 4% PFA and permeabilized with 0.5% Triton X-100 (Sigma Aldrich). Nonspecific binding was blocked by treatment with PBS containing 0.1 M glycine and PBS containing 1% bovine serum albumin. Alexa Fluor 488-phalloidin (Life Technologies) and DAPI were used to stain the actin cytoskeleton and nucleic acids, respectively. Samples were mounted in mounting medium (Dako) and examined with a fluorescence microscope (Nikon Eclipse C1 plus).
Co-purification Assay-For YscN/YopE interaction analysis, the plasmid encoding His 6 -YscN was introduced into the BL21(DE3) strain. Cells were grown at 37°C. At A 600 = 0.7, protein production was induced by addition of 1 mM IPTG for 2 h. Cells were pelleted for 10 min at 6,000 × g. Cell pellets were resuspended in PBS, 1% Triton X-100, and sonicated on ice. Broken cells were centrifuged for 20 min at 20,000 × g at 4°C, and the supernatant was incubated with HIS-Select Nickel Affinity Gel (Sigma Aldrich) for 1 h at 4°C. The resin was washed, resuspended in PBS, 0.1% Triton X-100, and used in an interaction assay. Yersinia strains bearing plasmids encoding the different YopE constructs were grown under conditions that allow induction of the T3SS. Cells were lysed as described above, and supernatants were incubated with gel-bound YscN for interaction. After 2 h of gentle rocking at 4°C, samples were washed with PBS, 0.1% Triton X-100. Proteins that remained bound to the gel were eluted with SDS-PAGE sample buffer. Samples were separated by SDS-PAGE and transferred onto a PVDF membrane for immuno-detection. A similar protocol was used for the GST-YscU CC /YscN interaction, with some modifications: the GST constructs were bound to glutathione-Sepharose resin (GE Healthcare), and His 6 -YscN was in solution for the interaction assay.
GST-YscU CC Is Secreted and Interferes with Yops Secretion-
Earlier results have shown that YscU CC is secreted by the T3SS (26), and we were therefore interested in localizing the minimal T3S signal within YscU CC . A classical approach to identify and characterize secretion signals is to generate hybrid proteins with reporter proteins such as GFP, adenylate cyclase, or GST (glutathione S-transferase) and follow the secretion pattern of these hybrids. We therefore generated GST-tagged YscU CC variants and introduced the plasmids expressing these constructs into the Y. pseudotuberculosis strain YPIII/pIB29 (MEK). To induce the T3SS, bacteria were grown in Ca 2+ -depleted medium and shifted from 26°C to 37°C. YscU CC -GST (U CC -GST), in contrast to GST and GST-YscU CC (GST-U CC ), could not be detected in the cell lysates of either the Yersinia or the E. coli BL21(DE3) strain after induction. This hybrid protein was therefore not studied further. The level of Yops detected in the supernatant of the strain expressing gst-U CC was around 70% lower than the level observed for the strain expressing gst alone (Fig. 1A, top panel, compare lanes 1 and 2). Next, we tested whether GST-U CC was secreted by the T3SS. For this, anti-GST antibodies were used to detect GST in the supernatants and pellets of strains expressing gst and gst-U CC . Two bands were detected in the supernatant of the strain expressing gst-U CC (Fig. 1A, middle panel). The higher molecular weight band (36 kDa) corresponds to GST-U CC , while the lower band (25 kDa) corresponds to GST, most likely generated by degradation of the hybrid protein. No band was detected in the supernatant of the strain expressing gst only (Fig. 1A, middle panel). These results show that YscU CC fused to the C terminus of GST promotes secretion of the GST-U CC hybrid protein. Furthermore, GST-U CC was not detected in the supernatant when the bacteria were grown in medium supplemented with Ca 2+ (data not shown), demonstrating that GST-U CC secretion is dependent on the T3SS. Similar amounts of GST and GST-U CC were detected in the cell pellets, showing that the absence of GST in the supernatant was not linked to an expression defect or protein instability (Fig. 1A, lower panel).
YscU CC Contains a C-terminal Secretion Signal-The data presented above indicated that, unlike the Yops, YscU CC does not need to be located at the N terminus to promote secretion of GST through the T3SS. Instead, our results suggested that YscU CC harbors a C-terminal T3S signal. To define this putative C-terminal secretion signal, truncated variants of GST-U CC were generated by deleting the last three and six residues of YscU CC (GST-U CCΔ3 and GST-U CCΔ6 ). GST-U CCΔ3 and GST-U CCΔ6 also interfered with Yop secretion, but to a lesser extent than GST-U CC (Fig. 1A, upper panel). Interference with Yop secretion by GST-YscU C has been observed before by Riordan and Schneewind (30). The authors proposed that interaction sites within YscU C mediate GST-YscU C interaction with some components of the machinery (e.g. YscL, YscQ, and YscK), which may interfere with Yop secretion. It is possible that GST-YscU CC , GST-U CCΔ3 , and GST-U CCΔ6 interfere with Yop secretion in a similar manner. However, YscU is a critical secretion regulator; it is therefore most plausible that the interference with Yop secretion is due to an increased level of YscU C that may hamper the regulation of effector secretion. Nevertheless, neither GST-U CCΔ3 nor GST-U CCΔ6 was detected in the culture supernatant, indicating that these two constructs are not secreted by the T3SS (Fig. 1A, middle panel). Similar amounts of the GST and GST-U CC proteins were detected in the cell pellets, demonstrating that GST-U CCΔ3 and GST-U CCΔ6 are produced and stable (Fig. 1A, lower panel). Altogether, these data show that the absolute C terminus of YscU CC harbors a T3S signal.

(Figure 1 legend.) Culture supernatants were TCA precipitated, separated on Tris-Tricine gels, and stained with Coomassie R-250 (upper panels). TCA-precipitated supernatants (2nd and 4th panels from the top) and cell pellets (3rd and 5th panels from the top) were analyzed by Western blot. Anti-GST antibody was used to detect the different GST-YscU variants (2nd and 3rd panels from the top). White asterisks indicate degradation products of the different GST constructs. An antibody directed against the cytoplasmic protein DnaJ was used to verify the absence of cell lysis (4th and 5th panels from the top). DnaJ was found only in the cell pellets, demonstrating the secretion specificity of the GST-YscU variants. A, yscU CC was cloned into the pGEX-6P-3 plasmid (GE Healthcare) to generate a GST-U CC hybrid protein. GST-U CCΔ3 and GST-U CCΔ6 were obtained by deletion of the last 3 and 6 residues of YscU CC , respectively. The empty plasmid expressing GST alone was used as a control. The amount of YopD for the strain expressing GST was set at 100% for densitometry quantification. Secreted YopD was evaluated at 34 and 65% for the strains expressing GST-U CC and GST-U CCΔ3 , respectively. B, GST and GST-YscU variants were cloned into the low-copy plasmid pBAD-HisA (Invitrogen) under an arabinose-inducible promoter. GST-U 34 , GST-U 28 , and GST-U 15 correspond to GST fused to the last 34, 28, and 15 residues of YscU CC , respectively.
To further explore the C-terminal T3S signal of YscU CC , GST-U CC constructs with large deletions in the N-terminal region of YscU CC were generated. According to the atomic structure of YscU CC , residues Leu 321 to Ile 326 form a random coil that is followed by an α-helix extending from Pro 327 to Arg 339 (Fig. 2A) (32). The structural organization of the last 15 residues (Trp 340 to Leu 354 ) remains unknown (Fig. 2A). Based on these structural features, three variants were produced: GST-U CC34 , GST-U CC28 , and GST-U CC15 (Fig. 2A). GST-U CC34 contains the last 34 residues, which comprise the random coil and the α-helix. In the GST-U CC28 variant, the random coil is no longer present. GST-U CC15 contains only the last 15 residues of YscU CC . These three variants were introduced into the Y. pseudotuberculosis YPIII/pIB29 (MEK) strain and probed for secretion. Immuno-detection of GST in the cell pellets showed that these constructs were stable and expressed at equivalent levels (Fig. 1B, lower panel). While GST remained cytoplasmic, GST-U CC34 , GST-U CC28 , and GST-U CC15 were detected in the supernatant, showing that these three variants were secreted via the T3SS (Fig. 1B, middle panel). These data show that YscU CC harbors a C-terminal T3S signal and that the last 15 residues are sufficient to promote secretion of GST. After cleavage at the NPTH motif, YscU CC forms a stable complex with YscU CN . The crystal structure of the YscU CN /YscU CC complex showed that the N-terminal residues of YscU CC are fully or partially buried within the structure, while the C-terminal residues are most likely exposed and unstructured (Fig. 2A) (32). These structural features of YscU CC make the N-terminal residues inaccessible for the interaction with the ATPase YscN that is required for secretion. The C-terminal residues of YscU CC , on the other hand, may provide an alternative route that overcomes this structural constraint.
C-terminal Targeting of YopE to the T3SS-It is known that most of the late T3S substrates interact with specific T3S chaperones (35). T3S chaperones bind to the N-terminal region of the effectors (residues 25 to 100) to maintain the effectors in a partially unfolded state that facilitates interaction with the T3S machinery and subsequent secretion. Here we identified and characterized, within YscU CC , the first C-terminal T3S signal. It is plausible that YscU CC is an exception to the N-terminal location of the T3S signal. Nevertheless, it offers an opportunity to study the separation of the secretion signal from the chaperone-binding site by placing the last 15 residues of YscU CC (U 15 ) at the C terminus of a classical effector lacking its native secretion signal. For this, we used YopE, a well characterized substrate of the Yersinia T3SS. YopE harbors a "classical" N-terminal secretion signal corresponding to the first 15 residues, and deletion of this sequence blocks YopE secretion (15,17,36). To test whether the last 15 residues of YscU CC (U 15 ) can promote secretion of YopE ΔSP (lacking the first 15 residues), we constructed different variants of YopE and introduced the corresponding plasmids into the strain YPIII/pIB29 (MEK). As expected, YopE was detected in the supernatant of the strain expressing wild-type YopE, while no YopE was secreted by the strain producing YopE ΔSP (Fig. 3A, lanes 1 to 3). These data confirmed that the first 15 residues of YopE are critical for secretion. In the constructs U 15 -YopE ΔSP and YopE ΔSP -U 15 , the last 15 residues of YscU CC were fused to the N and C termini of YopE ΔSP , respectively. Neither of these two constructs was secreted into the supernatant at levels that allowed detection by Coomassie staining (Fig. 3A, lanes 4 and 5). However, YopE ΔSP -U 15 was detected in the supernatant by the more sensitive immuno-detection (Fig. 3, lanes 1 to 4). These results showed that C-terminal U 15 can promote secretion of YopE ΔSP , albeit at a low level.
YscU CC Secretion Signal: Orientation Does Matter-To be secreted via the T3SS, proteins have to interact with components of the T3S machinery, such as the ATPase YscN (24,25,37). It is known that protein-protein interactions depend on the spatial arrangement of the side chains of surface-exposed residues (38). Thus, the location (N- or C-terminal) as well as the orientation of the signal peptide might affect secretion efficiency. To test this hypothesis, we assayed the secretion of both U 15rev -YopE ΔSP and YopE ΔSP -U 15rev . U 15rev corresponds to a peptide that contains the same residues as U 15 but in reverse order. YopE ΔSP -U 15rev was not secreted, whereas U 15rev -YopE ΔSP was detected in the supernatant after Coomassie Blue staining (Fig. 3A, lanes 6 and 7). Thus, U 15 also promotes secretion when placed at the N terminus, but only when its residues are in the reverse orientation. These results indicate that the orientation of U 15 is important for secretion and, most likely, for interaction with the machinery. The level of U 15rev -YopE ΔSP detected in the supernatant is significantly higher than the secreted level of YopE ΔSP -U 15 , but is only about 5% of the amount detected for wild-type YopE, showing that the native YopE secretion signal is more efficient than U 15rev . The YopE signal peptide contains only one charged residue (Lys 2 ), while six charged residues are present in U 15 (Glu 342 , Arg 343 , Glu 347 , Lys 348 , His 350 , and Glu 352 ). As shown by a Kyte-Doolittle plot, the charged residues within U 15 make this peptide more hydrophilic than the YopE signal peptide (Fig. 2B). These differences in hydropathicity between U 15 and the native YopE signal peptide might explain the difference in secretion level. Indeed, charged residues as well as hydrophobic pockets at the interaction surface are parameters that guide protein-protein interactions (38,39). It is probable that U 15rev has a lower affinity for the machinery components that recognize the secretion signal.
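To make the hydropathy comparison concrete, here is a minimal sketch of a sliding-window Kyte-Doolittle profile of the kind plotted in Fig. 2B. The scale values are the published Kyte-Doolittle indices; the input peptide, however, is a hypothetical placeholder, since the U 15 sequence itself is not reproduced in this text.

```python
# Sliding-window Kyte-Doolittle hydropathy profile (window mean).
KD = {"I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
      "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
      "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9,
      "R": -4.5}

def hydropathy_profile(seq, window=5):
    """Mean KD score over each full window along the sequence."""
    scores = [KD[aa] for aa in seq.upper()]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

peptide = "WQERKAAEKGHEGEL"  # hypothetical 15-mer, NOT the real U15 sequence
for pos, h in enumerate(hydropathy_profile(peptide), start=1):
    print(f"window starting at residue {pos}: {h:+.2f}")  # negative = hydrophilic
```

Charged residues such as Glu, Lys, and Arg carry strongly negative indices on this scale, so a 15-mer with six charged residues will plot well below a signal peptide containing only one, as described above.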
We also investigated the ability of the native secretion signal of YopE (denoted E 15 ) to promote secretion of YopE ΔSP when placed at the C terminus. Surprisingly, neither YopE ΔSP -E 15rev nor YopE ΔSP -E 15 was detected in the culture supernatant (Fig. 3). Unlike U 15 , E 15 cannot promote C-terminal secretion, regardless of its orientation, showing that U 15 has features that are not shared with classical secretion signals.
The capacity of U 15 and U 15rev to promote translocation into eukaryotic cells was also evaluated. For this, we followed the development of the cytotoxicity phenotype of HeLa cells after infection with strains expressing different variants of YopE. Thirty minutes after infection, a full cytotoxicity phenotype was observed with the strain expressing YopE (Fig. 4B), while no cytotoxicity was observed when HeLa cells were infected with strains carrying an empty plasmid or plasmids expressing YopE ΔSP or YopE ΔSP -U 15 (Fig. 4, A, C, and D). On the other hand, U 15rev -YopE ΔSP provoked cytotoxicity in around 30% of the HeLa cells (Fig. 4E). The cytotoxicity phenotypes of YopE ΔSP -U 15 and U 15rev -YopE ΔSP correlate with their secretion efficiencies.
YscU C Interacts with the ATPase YscN-It was recently suggested that the ATPase YscN interacts with the secretion signal of YopR, and that this interaction is critical for YopR secretion by the T3SS (24).

(Figure 3 legend.) Secretion of different YopE/YscU hybrid variants. Secretion of the different YopE/YscU hybrid proteins was assayed by introducing plasmids expressing the corresponding constructs into the YPIII/pIB29 (MEK) strain. All constructs were cloned into the pBAD-HisA plasmid (Invitrogen) under an arabinose-inducible promoter and expressed in trans. YopE corresponds to the wild-type protein, and YopE ΔSP bears a deletion of the native signal peptide. The last 15 residues of YscU (U 15 ) were fused to the N and C termini of YopE ΔSP to generate U 15 -YopE ΔSP and YopE ΔSP -U 15 , respectively. Similarly, the reverse sequence of U 15 (U 15rev ) was fused to YopE ΔSP to generate U 15rev -YopE ΔSP and YopE ΔSP -U 15rev , respectively. The signal peptide of YopE (E 15 ) and its reverse sequence (E 15rev ) were fused to the C terminus of YopE ΔSP to generate YopE ΔSP -E 15 and YopE ΔSP -E 15rev , respectively. A, culture supernatants were TCA precipitated, separated on Tris-Tricine gels, and stained with Coomassie R-250. For the strain expressing U 15rev -YopE ΔSP , the presence of this protein in the supernatant was confirmed by immuno-detection with anti-YopE antibody. To avoid overloading, unprecipitated supernatant was used (presented on the right side). The amount of secreted wild-type YopE was set at 100% for densitometry quantification, and the amount of secreted U 15rev -YopE ΔSP was evaluated at 3.2%. TCA-precipitated supernatants (B) and cell pellets (C) were analyzed by immuno-detection with anti-YopE antibody. The supernatants from the strains expressing YopE and U 15rev -YopE ΔSP were not analyzed on this gel due to the elevated amounts of these proteins in the supernatant. D, TCA-precipitated supernatants (upper panel) and cell pellets (lower panel) were analyzed by immuno-detection with anti-DnaJ antibody. The cytoplasmic protein DnaJ was used to verify the absence of cell lysis. DnaJ was found only in the cell pellets, demonstrating the secretion specificity of the YopE/YscU hybrid proteins.

Previous studies showed that surface-exposed
residues of some T3S chaperones in complex with their cargo effector also interact with the ATPase (40-42). The results presented above showed that U 15 and U 15rev , placed at the C-terminal or the N-terminal end of YopE ΔSP , respectively, promoted secretion, suggesting that both U 15 and U 15rev interact with YscN. To test this hypothesis, His 6 -YscN bound to a nickel-agarose resin was incubated with the different variants of YopE to evaluate the capacity of these proteins to co-purify with YscN. YerA, the cognate chaperone of YopE, was co-expressed with all constructs to avoid degradation of YopE and to maintain the YopE chaperone-binding domain in a partially unfolded state that facilitates interaction with YscN. YscN was eluted, and co-purified YopE variants were detected using anti-YopE antibodies. Like YopR, YopE interacted with YscN in a secretion signal-dependent manner (Fig. 5A). As expected, YopE ΔSP did not co-purify with YscN. U 15rev -YopE ΔSP , the variant with the highest secretion efficiency after wild-type YopE, co-purified with YscN (Fig. 5B). This interaction is strictly dependent on the N-terminal U 15rev peptide, since YopE ΔSP and YopE ΔSP -U 15rev did not co-purify with YscN (Fig. 5B). After YopE, U 15rev -YopE ΔSP is the variant that co-purified best with YscN (around 60% of the YopE amount), correlating with both its secretion efficiency and its translocation into HeLa cells. Importantly, although U 15 -YopE ΔSP was not secreted, this variant interacted with YscN, albeit at a lower level compared with U 15rev -YopE ΔSP (Fig. 5B). Surprisingly, we were unable to detect an interaction between YopE ΔSP -U 15 and YscN (Fig. 5B), although YopE ΔSP -U 15 was secreted via the T3SS. However, the level of secretion of YopE ΔSP -U 15 was considerably lower than that of YopE, and it is possible that this low level of secretion is a consequence of a low affinity of YscN for the YopE ΔSP -U 15 hybrid protein. Thus, interaction between the secretion signal and the ATPase YscN is critical for secretion; however, our results suggest that interaction per se may not be sufficient to promote secretion. In line with our findings, we could also show an interaction between YscU C and YscN (Fig. 5). A similar result has previously been described for Spa40 and Spa47, the homologues of YscU and YscN in Shigella flexneri (43).
Deletion of the C Terminus of YscU Affects YscF Secretion-It
has been shown earlier that YscU auto-proteolysis, followed by YscU CC dissociation, constitutes a critical step in the regulation of the substrate specificity switch (14,29,30). In addition, we recently showed that YscU CC is secreted by the T3SS when bacteria are grown under conditions allowing Yop secretion (26). These previously published studies, together with the data presented above, suggested that the C-terminal end of YscU CC would be essential for Yop secretion. Surprisingly, when YscU Δ3 , YscU Δ6 , and YscU Δ9 , which correspond to YscU constructs with C-terminal deletions, were introduced into a ΔyscU strain (YPIII/pIB75), Yop secretion was not affected (Fig. 6A). Similarly, these mutants induced a cytotoxic response after infection of HeLa cells that was indistinguishable from that of the corresponding yscU wt strain (Fig. 6B). Thus, neither Yop secretion into the culture supernatant nor Yop translocation into HeLa cells was affected by deletion of the last 9 residues of YscU. The fact that these mutants showed secretion and translocation of Yops at wild-type levels strongly suggests that both auto-proteolysis and dissociation of YscU CC were unaffected by the C-terminal deletions. As expected, however, secretion of YscU CC was impaired by the YscU C-terminal deletions (Fig. 7A, second panel from the top), suggesting that YscU CC secretion is not required to trigger dissociation; rather, YscU CC secretion is likely a consequence of dissociation. Interestingly, while similar amounts of intracellular YscF (the needle subunit) were detected for the different strains, we observed an increase in YscF secretion for the C-terminal deletion mutants (Fig. 7A, bottom and third panel from the top). These results suggest that the C-terminal signal sequence of YscU is involved in the control of YscF secretion. Elongated needles have been reported for ΔyscP mutants and for yscU point mutants defective in auto-proteolysis (11,44,45). However, in contrast to mutants carrying C-terminal deletions of YscU, Yop secretion is impaired in a yscP mutant as well as in the yscU processing mutants. Altogether, the different phenotypes observed for mutants expressing the YscU Δ3 , YscU Δ6 , and YscU Δ9 constructs suggest that the C-terminal end of YscU CC is directly involved in the regulation of secretion of the needle subunit, YscF. Previously, it has been shown in Y. enterocolitica that strains bearing the processing mutants YscU N263A and YscU P264A secrete elevated amounts of YscF as a consequence of reduced secretion of YscP (12,45). Interestingly, secretion of YscP was not affected in any of the YscU mutants with C-terminal deletions (Fig. 7B).
Discussion
We recently published results showing that YscU CC , the 10-kDa C-terminal polypeptide generated by YscU auto-proteolysis, is secreted via the T3SS of Y. pseudotuberculosis (26). Furthermore, the physiological conditions that allow YscU CC secretion are identical to the conditions required for secretion of the Yop proteins, i.e. incubation of the bacteria at 37°C in Ca 2+ -depleted medium. These results indicated that YscU CC putatively harbors a specific T3S signal, and we have addressed this question here. First, YscU CC (U CC ) was cloned at the C-terminal end of the GST reporter protein, and the resulting hybrid protein, GST-U CC , was probed for secretion. Remarkably, GST-U CC was found to be secreted via the T3SS in a Ca 2+ -dependent manner. These results demonstrated that C-terminally localized U CC can promote secretion of a reporter protein like GST via the T3SS. C-terminal targeting of T3S substrates (native substrates or hybrid proteins) to the machinery has never been reported before. In fact, studies of several T3SS effector and translocator proteins from different species demonstrated that the secretion signal is localized at the N terminus (1,17,24). To further investigate the YscU CC secretion signal, truncated GST-U CC variants with deletions in the N- or C-terminal regions of YscU CC were probed for secretion. The secretion profiles obtained for these different variants showed that the YscU secretion signal is localized within the last 15 amino acids of the protein (U 15 ). Moreover, when YopE ΔSP , the T3S substrate lacking its native secretion signal, was fused to U 15 , the resulting hybrid protein YopE ΔSP -U 15 was also found to be secreted by the T3SS. Thus, in addition to the well-studied "classical" N-terminal T3S secretion signals, we identified a C-terminal secretion signal that mediates secretion of both endogenous and exogenous proteins.

(Figure 5 legend.) A, His 6 -YscN bound to a His-Select resin was incubated with lysates containing different variants of YopE, co-expressed with the YopE chaperone YerA. After several washes, proteins associated with the resin were eluted and separated on a gel for immunoblotting with anti-YopE antibody (upper panel). The level and stability of the proteins in the lysates were analyzed by immunoblotting with anti-YopE antibody (lower panel). As can be seen, the wild-type signal sequence of YopE is essential for the interaction with YscN. Both U 15 and U 15rev interacted with YscN when placed at the N-terminal end of YopE ΔSP . N.B.: no interaction could be detected when U 15 was placed at the C-terminal end of YopE ΔSP . For a description of the different constructs used, see the legend to Fig. 3. The amount of co-purified wild-type YopE was set at 100% for densitometry quantification, and the amount of U 15rev -YopE ΔSP was evaluated at 62.8%. B and C, GST and GST-YscU C constructs immobilized on glutathione-agarose columns were incubated with purified His 6 -YscN. After several washes, proteins associated with the resin were eluted and separated on a gel for immuno-detection with anti-YscN antibody. The upper panels show the Coomassie staining of the GST constructs bound to the glutathione-Sepharose. B, YscN associated with GST-YscU C but not with GST alone. The binding of YscN to YscU C decreased by about 50% after deletion of the last 3 and 6 residues of YscU C , and YscN barely bound to GST-YscU CΔ9 . C, a similar experiment using large N-terminal deletions within YscU CC showed that the last 28 residues of YscU encompass the binding site for YscN.
The ability of the T3SS to export proteins directly from the bacterial cytoplasm into the extracellular medium or into the host cell cytosol has been exploited for heterologous protein production and for antigen delivery during vaccination (46,47). In this context, the newly identified C-terminal T3S signal may have interesting biotechnological applications for the production of proteins whose N-terminal region cannot be modified without affecting protein functionality. However, further investigations will be necessary to increase the secretion efficiency via the C-terminal T3S signal.
Earlier work has shown that T3S substrates must be unfolded to allow secretion (48,49). Structural evidence showed that the chaperone-binding domains within effectors (the first 100 residues) are naturally unfolded and that a disorder-to-order transition occurs upon binding to the cognate chaperone, stabilizing the secondary structure prior to secretion (50). The other regions of the effectors, on the other hand, are folded and are not affected by chaperone binding. Based on these observations, it has been proposed that an unfoldase is involved in substrate secretion through the T3SS (49). Hsp100 unfoldases belong to the AAA+ ATPase (ATPases Associated with various cellular Activities) family of proteins that catalyze protein unfolding and disaggregation (51). These proteins form ring-shaped oligomers (hexamers) with a central pore through which the unfolded substrates are threaded. The T3S unfoldase hypothesis was further supported when it was shown that ring-shaped hexamers of InvC, the AAA+ ATPase of Salmonella enterica, bind to effector/chaperone complexes and dissociate them by unfolding the effectors in an ATP-dependent manner (25). Our results are consistent with the unfoldase hypothesis since, in all likelihood, the N-terminal GST part of the GST-U CC hybrid protein is folded prior to secretion.
Recognition of the effector secretion signal by the ATPase is the first step in the unfolding and secretion processes (24,25). As previously demonstrated for the T3S substrate YopR (24), we showed here for YopE and YscU CC that the secretion signal is required for interaction with the ATPase YscN (Fig. 5). The secretion profiles, as well as the results from the protein-protein interaction assays, demonstrated that YscN interacts with the YscU CC secretion signal regardless of its localization at the C terminus (GST-U CC and YopE ΔSP -U 15 ) or at the N terminus (U 15rev -YopE ΔSP ). However, to promote secretion after interaction with YscN, the YscU CC secretion signal must be correctly oriented according to its localization. Indeed, the forward secretion signal U 15 promoted C-terminal secretion, while the reverse sequence U 15rev exclusively promoted N-terminal secretion. Although the levels of YopE ΔSP -U 15 and U 15rev -YopE ΔSP in the culture supernatants were considerably lower than the level of wild-type YopE, the secretion of these hybrid proteins was specific and dependent on U 15 and U 15rev , respectively. Hence, the orientation of the amino acids constituting the YscU CC secretion signal is essential for secretion, which strongly argues for a polarized threading mechanism guided by the orientation of the T3S signal. In line with this idea, the hybrid protein U 15 -YopE ΔSP was not secreted despite its interaction with the ATPase YscN.
Several studies have demonstrated the involvement of the YscU/FlhB family of proteins in the substrate specificity switch (11,13,14,34). YscU is anchored in the inner membrane via four transmembrane segments. The membrane-integrated part of the protein is connected to a large cytoplasmic domain via a linker sequence. Accumulated data indicate that the different portions of the protein have distinct functions (27). Recently, we showed that the linker sequence forms an α-helix due to interactions of its positively charged residues with the negatively charged lipids of the inner leaflet of the cytoplasmic membrane (52). This interaction, and consequently the α-helix formation, is critical for YscU function and Yop secretion. Auto-proteolysis at the conserved NPTH motif is also critical for YscU function, since mutants defective in auto-proteolysis show altered secretion of YscF and Yops (29-31, 45). Previously, we showed that Ca 2+ -depleted conditions trigger dissociation and secretion of YscU CC , which is critical for the substrate specificity switch (26). We describe here yet another novel phenotype associated with YscU. Similarly to yscP mutants (11,44), deletion mutants within the YscU C-terminal secretion signal secrete elevated levels of YscF, the needle subunit, which indicates that the secretion signal has a negative influence on YscF secretion. These results are in line with earlier findings showing that the last 33 amino acids of FlhB are dispensable for hook-basal body assembly. Interestingly, this domain of FlhB has an inhibitory effect on the substrate specificity switch, suggesting that the C-terminal region of FlhB is involved in the temporal regulation of flagellum assembly (33,34). In fact, it cannot be excluded that YscU, via its C-terminal secretion signal, possesses a negative activity similar to that of FlhB, affecting the switch from secretion of early to late substrates. However, unlike yscP mutants, strains with YscU C-terminal deletions were still able to secrete Yop proteins at wild-type levels, suggesting that the C-terminal end of YscU CC is involved in needle formation rather than in the substrate specificity switch. Needle length control and the substrate specificity switch are two closely related events that implicate both YscU and YscP (11,29,31). Indeed, it has been shown that deletion of YscP, as well as auto-proteolysis mutants of YscU, results in secretion of elevated amounts of YscF (44,45,53). It has been proposed that these mutants form long needles because of their inability to make the substrate specificity switch and secrete effectors. Thus, needle length control and the substrate specificity switch were considered interdependent processes. We speculate that the processing and dissociation of YscU CC are implicated in the substrate specificity switch, while the YscU secretion signal, together with YscP, is involved in needle length control. More work is, however, necessary to investigate the YscU/YscP interaction and how this interaction affects needle formation. | 9,237.2 | 2015-09-03T00:00:00.000 | [
"Biology"
] |
Telemedicine and Its Impact on the Preoperative Period: A Systematic Review of the Literature
—The application of telemedicine has aroused considerable interest in the field of chronic disease care, which is associated with clinical medicine. The aim of this research is to systematically evaluate the published evidence on telemedicine in the preoperative period. A systematic search covering the last five years was conducted, excluding secondary research. Selection and quality criteria were applied, yielding 68 articles that met them. The results show that the largest production comes from the United States and the United Kingdom, with collaboration between institutions and countries. The main use of telemedicine was in teleconsultation and telecounseling activities. In addition, telemedicine in the preoperative period was applied mostly to general procedures without distinction of surgical specialty, followed by oncological surgery and traumatology. The observed increase in production can be related to the need for physical distancing due to the pandemic. Future research could address the co-occurrence of search terms, the impact of smartphones, NER terms, and the impact of polarity and objectivity on readers' choice of articles to read, share, and cite.
I. INTRODUCTION
The application of telemedicine has sparked interest in chronic disease care settings, which are generally encompassed within clinical disciplines. The application of telemedicine in surgical practice has become relevant with the use of telesurgery. Reviews have been published on the use of applications for predicting surgery-associated mortality and their link with decision-making [1]. In this context, the question arises as to the impact of telemedicine in the preoperative period.
There are a variety of experiences with the application of telemedicine in diverse health settings [2] [3]; despite the advantages regarding access to care, limitations have also been pointed out, such as difficulty in telephone access, omitted or erroneous information during data collection, and delays in reporting cases under investigation [4].
As Bokolo [5] points out, telemedicine and telehealth refer to the use of information and communication technologies embedded in software programs with high-speed telecommunications systems for the provision, management, and monitoring of health services.
In Peru, the definition of telemedicine is contained in the amendments to the Telehealth Framework Law [6], as: "The provision of remote health services in the components of promotion, prevention, diagnosis, treatment, recovery, rehabilitation and palliative care, provided by health personnel using ICTs, with the purpose of facilitating access to health services for the population".
The term telemedicine is also used to refer to the use of telecommunications for the remote provision of health services.
The World Health Organization (WHO) points out that telemedicine includes diagnosis and treatment as well as medical education, and that it is a technological resource that makes it possible to optimize health care services, saving time and money and facilitating access to specialist care for distant areas. In the context of the health crisis resulting from the pandemic, its use has become especially relevant [7].
Its applications include clinical practice and health education. Within clinical practice, the following forms exist: telediagnosis, teleconsultation, remote monitoring, medical meetings to obtain second opinions (teleconference), and digital storage of data or medical records. Within the educational area, distance classes from medical centers (e-learning through videoconferencing) stand out [2] [4] [8].
Aspects related to the construction of bibliometric networks, polarity, objectivity, and subjectivity of the scientific production of telemedicine in the preoperative period have not been pointed out in the reviews on this topic.
In the current context, surgical activities have been suspended in most institutions around the world; considering the advantages of telemedicine in terms of timeliness of care [5], the aim is to investigate its usefulness in the field of preoperative activity. This systematic review aims to explore the state of the art of telemedicine in the field of preoperative care.
In this vein, Section II covers the background and related works, where characteristics similar to the proposals of this work are specified. Section III, the revision method, details the methodology used in this document. Section IV presents the results and discussion, showing the compiled studies and the data they generated by means of graphs and tables. To conclude, Section V, conclusions and future research, presents the recommendations reached as a result of the analysis of the information obtained, as well as suggestions for scientific production on the subject addressed.
II. BACKGROUND AND RELATED WORK
There are systematic reviews related to the application of telemedicine in the preoperative setting.
Research places telemedicine as a developing technology, and in the field of surgical practice, Sohn et al. [9] point to its use in plastic surgery and otolaryngology. In their review of telemedicine in the field of dermatological surgery, they point out that its application in preoperative consultation allows planning of the intervention and increases access to care. Bokolo et al., in their systematic review on the application of telemedicine and e-health technology in clinical services in response to the COVID-19 pandemic, point out the importance of the use of information and communication technologies (ICT) integrated with telecommunication software and systems for care, management, and monitoring in patient care [5].
Asiri et al., on the other hand, in their review of the use of telemedicine in surgical care found that, for the most part, patients treated with this technology reported time savings and a reduction in the number of lost workdays as benefits [8].
However, not all reviews pointed to positive aspects. Moentmann et al., in their review on telemedicine in otolaryngology, noted that a negative aspect was the limitation of patient contact, although video-otoscopy is the most widely supported telemedical intervention limiting physical contact between otolaryngologists and their patients [9]. Kim et al. conducted a systematic review of research addressing the use of technology to intervene preoperatively on surgery-related anxiety in pediatric patients and their parents or guardians. They noted that the available literature is extremely heterogeneous, which limits the ability to draw definitive conclusions about the effectiveness of technology-based interventions. In addition, the results showed that for this group of patients, tablets and manually operated devices with interactive capability may represent a viable option to address preoperative anxiety. However, they were unable to extrapolate these results to adults, with whom better results were obtained using videos [10].
More encouraging results are found in the reviews by Kolcun et al. [11] and Lu et al. [12]. The first highlights that the increase in the use of telemedicine has been favored by the crisis caused by the pandemic and represents an opportunity to continue developing this technology and validate its use in new fields. Their initial results show that this technology becomes a support for the interaction between doctors and patients during periods requiring social distancing, showing its usefulness for aspects that do not involve the need for physical contact. Certain perioperative tasks, such as complementary patient education and postoperative surveys, are indicated in this regard.
Lu et al., in their review of the use of the Short Message Service (SMS) and smartphones in surgical care, conclude that applications of this type offer a sophisticated yet simple tool to improve perioperative health care, and note the need for a regulatory framework for such communications [12].
Telemedicine is attracting attention in the healthcare sector due to the diversity of interaction modalities that have been developed over the last decade, which are becoming increasingly affordable for both patients and doctors. At this point, Shanbehzadeh et al. [13] highlight short message service, email and web portals, secure phone calls or VoIP, video calls, interactive mobile health applications (m-Health), remote patient monitoring, and video conferencing. They also point out that the synchronous modality through common social networks presented the highest percentage of use for clinical care, while data exchange activities using the store-and-forward service via secure messaging technology and pre-recorded media files were the least popular.
III. REVISION METHOD
The method used in this research is the systematic literature review (RSL), defined as a process of identifying, analyzing, and interpreting the existing scientific evidence on a topic, with the aim of providing answers to specific research questions.
The methodology used to develop the RSL in this paper is based on the document proposed by Kitchenham [14], who divides the whole process into three general parts: the planning of the review, its development, and the publication of results. This research followed the phases defined by Kitchenham, as well as the activities that compose them. In the first phase, the research questions are specified and the review protocol is developed, which is necessary to reduce the possibility of bias. In the second phase, the studies to be included in the research are identified and their quality is evaluated. Finally, in the third phase, the results obtained are detailed (see Fig. 1, a translation of the systematic literature review process proposed by Kitchenham).
A. Problems and Objectives
When a systematic review of the literature is conducted, research questions are defined, which help in the extraction and analysis of data to meet the objectives of the research.
For this research, one general question and eight specific questions were posed.
The general question was:
What is the state of the art of Telemedicine and its impact on the Preoperative Period?
The objective of this study was to determine the current state of knowledge of the application of telemedicine in the preoperative period and to know the impact that this intervention generates in this period.
The specific questions and their objectives are shown in Table I. One of these objectives, for example, is to determine which articles' abstracts are characterized by high objectivity, by year and country, in telemedicine research and its impact on the preoperative period.
B. Search Sources and Search Strategy
For this work, a bibliographic search was carried out using the most well-known search engines (see Table II, which lists the search engines used to locate research papers related to telemedicine and the preoperative period).
To determine the search terms, two well-known thesauri were used: DeCS/MeSH for the terms related to telemedicine and the preoperative period, and the IEEE Xplore thesaurus for the term telemedicine and for the term methodology (see Table III).
The general equation was determined using the dependent, independent, and intervening variables (see Fig. 2). Equations based on the general equation were then derived for each search engine (see Table IV).
C. Identified Studies
The search yielded a total of 10,741 articles (see Fig. 3), to which filters related to the temporality of publication were applied, keeping articles from the last five years, as well as segregation by language, selecting those written in English or Spanish. Subsequently, articles published in scientific journals and peer-reviewed, as well as documents that were not duplicates, were selected.

Table IV. Search equations used in each source:

IEEE Xplore: ("All Metadata":telemedicine OR "All Metadata":"digital health" OR "All Metadata":"digital healthcare" OR "All Metadata":e-health OR "All Metadata":m-health OR "All Metadata":"electronic health" OR "All Metadata":"mobile health") AND ("All Metadata":"preoperative period" OR "All Metadata":"preoperative care") AND ("All Metadata":methodology OR "All Metadata":method OR "All Metadata":model)

Scopus: (telemedicine OR "digital health" OR "digital healthcare" OR e-health OR m-health OR "electronic health" OR "mobile health") AND ("preoperative period" OR "preoperative care") AND (methodology OR method OR model)

ARDI: (telemedicine OR "digital health" OR "digital healthcare" OR e-health OR m-health OR "electronic health" OR "mobile health") AND ("preoperative period" OR "preoperative care") AND (method OR methodology OR model)

ProQuest: (telemedicine OR "digital health" OR "digital healthcare" OR e-health OR m-health OR "electronic health" OR "mobile health") AND ("preoperative period" OR "preoperative care") AND (methodology OR method OR model)

ScienceDirect: (telemedicine OR "digital health" OR "digital healthcare" OR e-health OR m-health OR "electronic health" OR "mobile health") AND ("preoperative period" OR "preoperative care") AND (methodology OR method OR model)

Wiley Online Library: ""telemedicine" OR "digital+health" OR "digital healthcare" OR "e-health" OR "m-health" OR "electronic+health" OR "mobile health"" anywhere and ""preoperative period" OR "preoperative care"" anywhere and ""method" OR "methodology" OR "modeling"" anywhere

Microsoft Academic: (telemedicine OR "digital health" OR "digital healthcare" OR e-health OR m-health OR "electronic health" OR "mobile health") AND ("preoperative period" OR "preoperative care") AND (methodology OR method OR model)

Springer: (telemedicine OR "digital health" OR "digital healthcare" OR e-health OR m-health OR "electronic health" OR "mobile health") AND ("preoperative period" OR "preoperative care") AND (methodology OR method OR model)

Google Scholar: (telemedicine OR "digital health" OR "digital healthcare" OR e-health OR m-health OR "electronic health" OR "mobile health") AND ("preoperative period" OR "preoperative care") AND (methodology OR method OR model)

A short sketch of how such equations can be assembled programmatically follows the table.
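As a minimal sketch of how the general equation can be built from the three variable groups (the terms are taken from Table IV above; the helper functions are ours, and real engines differ in field tags and quoting rules):

```python
# Assemble the general search equation from the three variable groups.
TELEMEDICINE = ["telemedicine", "digital health", "digital healthcare",
                "e-health", "m-health", "electronic health", "mobile health"]
PREOPERATIVE = ["preoperative period", "preoperative care"]
METHOD = ["methodology", "method", "model"]

def or_group(terms):
    """Quote multiword terms and join a synonym group with OR."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

def general_equation():
    """Join the three OR-groups with AND, as in Table IV."""
    return " AND ".join(or_group(g) for g in (TELEMEDICINE, PREOPERATIVE, METHOD))

print(general_equation())
```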
D. Exclusion Criteria
The following exclusion criteria were established for selecting articles:

CE1: The article is more than five years old.
CE2: The article is written in a language other than English or Spanish.
CE3: The article did not follow a peer-review process or was not reported in a scientific journal.
CE4: The article did not propose a telemedicine solution or did not mention a method or technique.
CE5: The article is not relevant to the objectives of the research.
CE6: The article is not available, or its full text is not available.
CE7: The article is not unique.

A sketch of how these criteria can be applied as a filter over the retrieved records is shown below.
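A minimal sketch of such a filter, assuming article records with the boolean and numeric fields shown (the field names are ours, not the study's):

```python
# Apply exclusion criteria CE1-CE7 as a single keep() predicate.
REVIEW_YEAR = 2021  # the review covers the last five years

def keep(article):
    """Return True if the article passes all exclusion criteria."""
    return (REVIEW_YEAR - article["year"] <= 5               # CE1
            and article["language"] in {"en", "es"}          # CE2
            and article["peer_reviewed_journal"]             # CE3
            and article["telemedicine_solution"]             # CE4
            and article["relevant"]                          # CE5
            and article["full_text_available"]               # CE6
            and not article["duplicate"])                    # CE7

example = {"year": 2020, "language": "en", "peer_reviewed_journal": True,
           "telemedicine_solution": True, "relevant": True,
           "full_text_available": True, "duplicate": False}
print(keep(example))  # True -> the article is retained
```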
E. Selection of Studies
Initially, 10,741 articles were obtained, to which the exclusion criteria were applied to filter and select the most relevant articles, those that best answer the research questions posed (see Fig. 4 and Fig. 5).
As a result of this stage, a total of 68 articles were included (see Table V).
F. Quality Assessment
To determine the final list of articles to be included in this research, criteria were applied to evaluate their quality.
Quality assessment criteria were determined for methodological characteristics and for substantive characteristics.
1) Methodological characteristics:
QA1: Are the objectives of the research clearly identified in the document? QA2: Are the research results clearly identified and reported?
2) Substantive features:
QA3: Does the research consider elective surgeries? QA4: Is it possible to contact the principal investigator? The full text of each document was analyzed and the criteria shown were applied to evaluate its quality, leading to the retention of all 68 articles.
G. Data Extraction Strategies
At this stage, the final list of articles was used, from which the necessary information was extracted to answer the research questions (RQ1 to RQ8). The web and desktop application Zotero was used to manage data extraction (see Fig. 6).
H. Synthesis of Findings or Synthesis of Data
The information extracted for the Research Questions (RQ1 to RQ8) was tabulated and presented as quantitative data, using Excel, to statistically compare the various findings for each Research Question.
Certain patterns of research were found, as well as research directions pursued during the last few years.
Zotero was used for data management, while VOSViewer and Onodo were used for the analysis of bibliometric networks.
To determine objectivity and polarity, a Python program using the TextBlob library and the open-access program CoreNLP v.4.3.2 were used.
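A minimal sketch of that scoring step with TextBlob (the example title and the neutrality cutoff are illustrative assumptions, not taken from the study):

```python
from textblob import TextBlob

def score(text):
    """Return (polarity, subjectivity): polarity in [-1, 1], subjectivity in [0, 1]."""
    s = TextBlob(text).sentiment
    return s.polarity, s.subjectivity

title = "Telemedicine improves preoperative assessment: a pilot study"  # illustrative
pol, subj = score(title)
# The 0.05 neutrality band is an assumption for classification purposes.
label = "neutral" if abs(pol) < 0.05 else ("positive" if pol > 0 else "negative")
print(f"polarity={pol:+.2f} ({label}), subjectivity={subj:.2f}")
```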
A. Study Overview
Of the 68 articles included in the research, there has been a sustained increase in scientific production in the last two years (see Fig. 7). A variety of sources were searched, including some that are not common for health research publications. At this point, we compare our results with those obtained by Bokolo et al. [5], whose search covered Google Scholar, PubMed, ScienceDirect, ProQuest, Springer, Sage, Taylor & Francis, IEEE Xplore, Wiley, ACM, Emerald, Inderscience, ISI Web of Science, and Scopus. The results are also compared with reviews of articles published in sources better known in the healthcare sector, such as the research conducted by Asiri et al., whose review on telemedicine in surgical care used MEDLINE, EMBASE, CINAHL, and ScienceDirect to obtain articles [8].
Other reviews, such as that of Jonker et al. on e-health in the perioperative period in older adults, included PubMed, EMBASE, and CINAHL [77]. For its part, the team of Moentmann et al., in their review on telemedicine in otolaryngology, searched Embase, PubMed, and Web of Science [9].
The number of articles included in these reviews is similar, except for the review by Jonker et al., in which the number of articles included was lower due to the delimitation of search criteria for the target group (older adults) [15].
As for the authors, they amounted to 436 across the 68 articles included. The number of authors per article varied, with an average of six authors per publication. No collaborative relationships were found between the different research groups (see Table VI and Fig. 8).
A point to consider relates to the words most repeated in the titles (see Fig. 9). The most frequent words were "preoperative", "study", "surgery", "telemedicine", "patients", "COVID-19", and "mobile", which are related to the search terms used. It is evident that the words most frequently used in medical articles are related to the search terms employed. Table VII and Fig. 10 show the most frequently used keywords. This was consistent with other systematic reviews that include these keywords in their publications [5] [16], while other reviews report keywords such as "surgical procedure", "satisfaction", and "monitoring" [8]. It has also been important to find co-occurrence between the keywords of the articles, such as "telemedicine", "COVID-19", "patient satisfaction", "preoperative care", and "prehabilitation", shown in Fig. 11; a sketch of how such co-occurrences are counted follows. This result can provide guidance on the impact that the pandemic has had on the development of telemedicine research in the preoperative period, which in turn is related in the articles to patient satisfaction and better preparation for surgery [5]. It should be noted at this point that the named-entity recognition (NER) term search program, applied to the titles of the articles, also identified the terms "COVID-19" and "COVID-19 Pandemic".
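A minimal sketch of the pairwise counting behind such a co-occurrence map of the kind visualized with VOSviewer, assuming per-article keyword lists (the lists below are illustrative):

```python
from collections import Counter
from itertools import combinations

articles_keywords = [  # illustrative keyword lists, one per article
    ["telemedicine", "COVID-19", "preoperative care"],
    ["telemedicine", "patient satisfaction", "prehabilitation"],
    ["telemedicine", "COVID-19", "patient satisfaction"],
]

pairs = Counter()
for kws in articles_keywords:
    # count each unordered keyword pair at most once per article
    pairs.update(combinations(sorted(set(kws)), 2))

for (a, b), n in pairs.most_common():
    print(f"{a} <-> {b}: {n}")
```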
2) RQ2: What is the relationship between the polarity of article titles and the frequency with which they are cited by other authors in telemedicine research and its impact on the preoperative period? Although no systematic reviews have been found that explore this point in the field of telemedicine, it is considered important to analyze the impact of this variable on readers.
As a result of the analysis of the titles of the articles, it was determined that, in general, titles with neutral polarity were the most cited, followed by those with positive polarity (see Fig. 12). On the other hand, the articles identified through the Microsoft and IEEE Xplore search engines showed greater neutrality in their writing (see Fig. 13).
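Before turning to the per-source breakdown, here is a minimal sketch of the kind of aggregation behind Fig. 12 and Fig. 14, assuming records with a polarity class, source, and citation count (the rows are invented for illustration and are not the study data):

```python
import pandas as pd

df = pd.DataFrame([  # invented records, for illustration only
    {"polarity": "neutral",  "source": "Scopus",      "citations": 12},
    {"polarity": "positive", "source": "IEEE Xplore", "citations": 7},
    {"polarity": "neutral",  "source": "IEEE Xplore", "citations": 9},
    {"polarity": "negative", "source": "Scopus",      "citations": 2},
])

# Mean and total citations per polarity class (as in Fig. 12) ...
print(df.groupby("polarity")["citations"].agg(["mean", "sum"]))
# ... and total citations per polarity class and source (as in Fig. 14).
print(df.groupby(["polarity", "source"])["citations"].sum())
```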
Regarding the number of citations in relation to polarity and the search source in which the article was found, the highest frequency of citations corresponds to neutral titles retrieved from Scopus (see Fig. 14). 3) RQ3: Which are the most productive institutions that establish collaborative networks in the development of telemedicine and its impact on the preoperative period? Both public and private healthcare institutions, as well as those dedicated to research (universities, research groups), collaborated in the scientific production on telemedicine in the preoperative period. Six institutions produced two or more research articles (see Table VIII).
Collaboration between institutions is visualized in Fig. 15.
Here we can see that the Technical University of Munich stands out. The articles that contributed the most to answering this question came from Scopus (see Table IX). Some systematic reviews [1] [15] have pointed out the importance of collaboration between institutions and have included related experiences (first-level and specialized centers, research institutions, universities, and hospitals) in their reviews.
4) RQ4. In which countries is telemedicine being applied most frequently in the preoperative period?
It is evident that publications related to telemedicine in the preoperative period have been produced most frequently in the United States (49%) (see Table X and Fig. 16). This result is consistent with that described by M. Shanbehzadeh et al., in which most of the articles obtained (76.75%) were carried out in this country [13]. Regarding the establishment of collaboration networks with other countries, this research shows that the United States also leads in this respect (see Fig. 17). Most of the publications correspond to journal-type articles (see Fig. 18). This is consistent with other publications in the field of health [1] [5], in which the main input is publications of this type. It should be noted that some studies have taken only publications of this type as input, as in the research by Kolcun et al., which excludes publications such as "case reports", "technical reports", and "conference abstracts" [11]. The main use of telemedicine in this period was in teleconsultation and telecounseling activities (see Fig. 19).
In this regard, the findings are consistent with the results of Asiri et al. [8], Kolcun et al. [11] and Shanbehzadeh et al. [13], in which the predominant use of telemedicine for teleconsultation and teleguidance activities was evidenced; use for telesurgery, tele-education and telemonitoring was also reported. As for the modality used, it was mainly asynchronous (43%), although this does not differ greatly from the synchronous modality (38%); 19% of publications used both modalities to provide telemedicine services (see Table XI and Fig. 20). These results do not differ greatly from other reviews, in which both modalities were used [5], with videoconferencing preferred for aspects related to diagnostic assessment [11].
The communication channels used by the researchers varied according to the activity carried out, but the use of videoconferencing and mobile applications stands out (see Table XII). These results coincide with studies carried out at the first level of care, such as the one conducted by A. C. Shah and S. M. Badawy in 2020 [85]. No percentages are given in this table, since about half of the publications refer to the use of more than one communication channel at the same time.
These results could be linked to the emergence of new technologies associated with videoconferencing equipment and the expansion of smartphones [8]. Chen et al. [1] clarify this topic in their review "Smartphone applications in orthopedic surgery", mentioning that the use of this equipment by physicians amounts to 90%, and performing a descriptive analysis of the use of mobile phones in the field of orthopedics, finding that their use in this field varied in capability from angular measurement to preoperative planning and gait quantification. They conclude that, as more advanced applications are developed, smartphones are likely to gain an increasing presence in both the operating room and clinical settings.
It should also be pointed out that the articles included in this research describe interventions that used more than one communication channel (see Fig. 21).
7) RQ7: What are the surgical specialties that most frequently apply telemedicine in the preoperative period? Telemedicine investigations in the preoperative period were largely carried out without distinction of surgical specialty. A concentration of publications related to general preoperative management, oncological surgery, traumatology, general surgery, and neurosurgery is observed (see Fig. 22). These results are in line with those published by Gachabayov et al., who address the role of telemedicine in surgical specialties during the pandemic and point out that most articles in the first six months were produced in orthopedic surgery, followed by general surgery and neurosurgery, while in the second six months urology and neurosurgery were the most productive, followed by transplantation and plastic surgery [86]. We observed that the production of articles during the first years was lower than during the last two years, but in recent years there has also been an increase in subjectivity in the abstracts of publications (see Fig. 23). In terms of countries with highly objective abstracts, the United States continues to lead (see Fig. 24).
V. CONCLUSIONS AND FUTURE RESEARCH
This document has provided a statistical analysis of the application of telemedicine in the preoperative period, through the extraction of data from a total of 68 articles published between 2017 and 2021. The highest percentage of identified studies was obtained from Springer; however, after applying the filtering and exclusion criteria, the highest percentage of included studies came from Scopus. It should be noted that the greatest use of telemedicine in this period is concentrated in teleconsultation and telecounseling services, with the largest scientific production on aspects related to general preoperative procedures, followed by those applied to oncological surgery and traumatology. There has also been an increase in production in recent years, probably due to the need for physical distancing during the pandemic and the demand for activities in the surgical field.
For future research, it would be opportune to consider the co-occurrence of search terms, in this case telemedicine with COVID-19 and preoperative care. It would also be a great contribution to analyze the impact smartphones have on preoperative care. Another relevant aspect would be to examine the use of NER terms and the impact of polarity and objectivity on readers' choice of articles to read, share and cite.
REFERENCES
[1] Chen, E. A., Ellahie, A. K., & Barsi, J. M. (2019). Smartphone applications in orthopaedic surgery: a review of the literature and application analysis.
Fig. 1. Development phases of the Systematic Literature Review (SLR). Translation of the systematic literature review process proposed by Kitchenham.
RQ2: What is the relationship between the polarity of article titles and the frequency with which they are cited by other authors in telemedicine research and their impact on the preoperative period? Objective: to determine the relationship between the polarity of article titles and the frequency with which they are cited by other authors in telemedicine research and its impact on the preoperative period.
RQ3: What are the most productive institutions in the development of telemedicine and its impact on the preoperative period? Objective: to determine which institutions are the most productive in the development of telemedicine and its impact on the preoperative period.
RQ4: In which countries is telemedicine being applied most frequently in the preoperative period? Objective: to determine where telemedicine is most commonly applied in the preoperative period.
RQ5: Which means of publication are the main targets for the production of research in the area of telemedicine in the preoperative period? Objective: to determine the main means of publication for the production of research in the area of telemedicine in the preoperative period.
RQ6: What are the types of telemedicine services that are most frequently provided in the preoperative period? RQ8: Which are the articles whose abstracts are characterized by their high objectivity, by year and country, in research on telemedicine and its impact on the preoperative period?
Fig. 3. Number of studies identified by search source.
Fig. 4. PRISMA flowchart on the application of criteria for the selection of articles.
The data extracted from each article were: Article ID, Article Title, URL, Source, Year, Country, Number of Pages, Language, Type of Publication, Publication Name, Research Methodology, Author(s), Affiliation, Number of Citations, Abstract, Keywords, Conclusions/Discussions, Sample Size, and RQ1-RQ8. Not all articles answered all research questions.
Fig. 5. Result of the application of the search formula using the IEEE Xplore, Scopus, ARDI, ProQuest, ScienceDirect, ACM Digital Library, Wiley Online Library, Microsoft Academic, Springer and Google Scholar search engines.
Fig. 7. Distribution of scientific production by year and source.
Fig. 8. First authors and co-authors who formed research teams with a larger number of members.
Fig. 9. Word cloud of the titles of the articles included in the research.
B. Answers to Research Questions
1) RQ1. What are the most used and relevant keywords, by number of articles, in telemedicine research and their impact on the preoperative period?
Fig. 10. Word cloud of the keywords of the articles included in the research.
Fig. 11. Co-occurrence of keywords in the articles included in the study.
Figures found in other reviews vary: Jonker et al. report 28% [15], while Kolcun et al. report 41.66% [11], with the United States occupying first place in scientific production.
Fig. 17. Collaboration between countries on publications on telemedicine in the preoperative period.
5) RQ5: Which means of publication are the main targets for producing research in telemedicine in the preoperative period?
Fig. 19. Types of service most frequently provided in the preoperative period.
Fig. 20. Telemedicine modality most frequently used in the preoperative period.
Fig. 21. Number of communication channels used in telemedicine activities during the preoperative period.
Fig. 22. The use of telemedicine in the preoperative period focuses on general procedures, oncological surgery, and traumatology.
8) RQ8: Which are the articles whose abstracts are characterized by their high objectivity, by year and country, in telemedicine research and its impact on the preoperative period?
TABLE I. CORRESPONDENCE BETWEEN RESEARCH QUESTIONS AND OBJECTIVES. Question: RQ1. What are the most used and most relevant keywords, by number of articles, in telemedicine research and their impact on the preoperative period? Objective: to determine the most used and most relevant keywords, by number of articles, in telemedicine research and in the preoperative period.
TABLE III. IDENTIFICATION OF SEARCH TERMS
TABLE IV. EQUATIONS AND SEARCH SOURCES
TABLE V. RESULT OF THE APPLICATION OF SELECTION CRITERIA. Note: Although ACM Digital Library provided the largest number of articles, the most relevant articles were obtained from Scopus.
TABLE VI. RESULT OF THE APPLICATION OF SELECTION CRITERIA
TABLE VII. 20 KEYWORDS MOST FREQUENTLY USED
TABLE XI. TELEMEDICINE MODALITY USED
TABLE XII. COMMUNICATION CHANNELS IN TELEMEDICINE MOST FREQUENTLY USED IN THE PREOPERATIVE PERIOD | 6,876.8 | 2024-01-01T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Control and monitoring of lipoprotein levels in atherosclerosis induced rabbits using novel nanoparticulate medication of Lovastatin and Rosuvastatin
Different epidemiological studies have demonstrated elevated uric acid and lipoprotein levels as risk factors for atherosclerosis. The present investigation deals with the formulation of a novel combination of Rosuvastatin and Lovastatin in nanoparticulate form for assessment in an atherosclerosis model. The ionic gelation method was found to be useful for the preparation of chitosan nanoparticles, which were evaluated for parameters such as morphology, zeta potential, and polydispersity index and found satisfactory for drug delivery. Application of a statistical model confirms the effect of Poloxamer 188 and chitosan on encapsulation efficiency and drug release. The drug release of Rosuvastatin-Lovastatin nanoparticles (RST-LST-NP) was compared with that of individual Rosuvastatin nanoparticles (RST-NP) and Lovastatin nanoparticles (LST-NP). The results obtained from serum uric acid analysis, as well as measurement of total cholesterol, high-density cholesterol, and low-density cholesterol levels, show the superiority of RST-LST-NP over RST-NP and LST-NP in lowering and maintaining serum lipoprotein levels.
INTRODUCTION
Atherosclerosis (also called arteriosclerosis) is one of the major causes of mortality and morbidity throughout the world. When the arteries thicken, harden due to deposition of plaque (fatty deposits), and become clogged, the condition is called atherosclerosis. This progressive disease is identified by the accumulation of lipidic substances and fibrous elements in large arteries [1]. The deposits are composed of cholesterol, fatty substances, and cellular waste products. Initially the disease shows no symptoms, but when it worsens it can lead to severe conditions. Depending upon the type of artery affected in a particular organ [2], different conditions may precipitate, such as coronary heart disease, angina or peripheral heart disease, and chronic kidney disease. The early lesions are also called 'foam cells'. It is a disease of the arterial wall, primarily affecting susceptible sites of major arteries. Lipid retention, oxidation, and modification can trigger chronic inflammation causing stenosis, thrombosis, or thrombotic occlusion [3].
One of the main underlying reasons behind atherosclerosis is a change in endothelial permeability. This change in permeability leads to migration of lipids (LDL-C particles) into the wall of arteries [4]. When monocytes circulating in blood adhere to endothelial cells, this leads to expression of adhesion molecules such as vascular cell adhesion molecule-1 (VCAM-1) and selectin; the monocytes migrate to the sub-endothelial space and are converted into foamy macrophages. Accumulation of intracellular cholesterol is enhanced through LDL particles in sub-endothelial spaces, which bind to native and modified lipoproteins and anionic phospholipids. The end result of this cascade is vascular modification [5,6]. The cascade can be expected to proceed in a sequence of narrowing of vessels, with conditions such as angina pectoris and acute coronary syndrome arising from plaque instability. The main reasons underlying the formation of thrombi are plaque rupture and/or erosion, and sometimes calcified nodules [6]. Vulnerable plaque contains monocytes, macrophages, and T-cells. The initial step of treatment mostly includes risk reduction through lifestyle changes such as diet, enhanced physical activity, and avoidance of primary as well as secondary smoking. But when these changes are not sufficient to control the disease, lipoprotein-mediated therapy is required [6]. Statins, or HMG-CoA reductase inhibitors, are the drugs of choice for treatment of the disease. Statin therapy is used for prevention and treatment of CHD (coronary heart disease), hypertension, and cerebrovascular disease [7]. These statins cause increased expression of LDL receptors (LDL-R) on the hepatocyte surface, resulting in enhanced uptake of LDL-C from circulating blood. This reduces the total plasma level of LDL-C along with apo-B-containing lipoproteins, including TG-rich particles [6,8]. In different trials, statin therapy has shown efficacy in lowering blood LDL levels. HMG-CoA reductase inhibitors (statins), a newer class of lipid-lowering compounds, have raised expectations for more widespread use than the older lipid-lowering drugs. Not only are they more effective in lowering LDL cholesterol, but they are better tolerated as well. No data exist concerning the effect of statins on early carotid atherosclerosis and clinical events in men and women who have moderately elevated LDL cholesterol levels but are free of symptomatic cardiovascular disease [8].
Rosuvastatin, a newer lipid-lowering agent, is used widely in day-to-day clinical practice. Not only its strong lipid-lowering potential but also its efficacy in reducing inflammatory reactions and normalizing vascular endothelial function with plaque stabilization are among the prime reasons for selecting this drug [9]. It has also been reported that it can activate PPAR-γ via activation of extracellular signal-regulated protein kinase with increased DNA binding activity; however, its overall effect on human monocytes is not clear [9,10]. The drug was recently approved in the US to delay progression of atherosclerosis by lowering low-density lipoprotein cholesterol (LDL-C) and total cholesterol (TC) to target levels. It is reported to have greater lipid-lowering efficacy than any other drug of the statin family. Some literature has reported that a significant reduction in disease was seen with Rosuvastatin 40 mg/day in coronary heart disease patients, which supports the lipid-lowering potential of Rosuvastatin. It is a well-tolerated lipid-lowering agent with potential for delaying atherosclerosis in primary prevention patients [11]. Lovastatin, another drug from the statin class, is used to reduce the risk of heart attack and stroke by decreasing the level of cholesterol (a fat-like substance) in blood so as to reduce the incidence of atherosclerosis [12]. The drug acts by slowing the production of cholesterol and reducing its total amount in the body. Lowering blood cholesterol levels, along with fewer fat deposits, proves helpful in the prevention of atherosclerosis [13].
The therapeutic efficacy of most drugs is greatly increased by employing nanoparticulate drug delivery systems, which have provided a powerful platform for a large number of drugs. The inherently ultra-small dimensions of nanoparticles make it very easy for them to accumulate in the host cell and exhibit their effect [14]. The distinct features of nanoparticles, such as biocompatibility, protection against degradation, and in vivo delivery of drug to the cell, make them the carrier of choice for most researchers [15]. The sustained drug release pattern, high loading capacity, and high surface-to-volume ratio make them suitable carriers for a large range of drugs [15,16]. Nanoparticles have uniquely appealing features that enable them to be programmed as cell- and tissue-specific delivery systems, thus overcoming the low drug delivery and off-target effects that commonly impede developments in cardiovascular medicine [17]. The present investigation deals with the formulation of a novel nanocarrier system for the most efficient possible delivery of Rosuvastatin and Lovastatin to control and maintain the level of lipoproteins in experimental animals.
Animal models
Adult male New Zealand rabbits weighing 3 to 4.5 kg were selected for the experiment. The animals were caged such that they had free and easy access to food and water. An adaptation period of 8-10 days was given to every animal prior to the experiment. The guidelines of the animal ethics committee were strictly followed throughout the entire investigation.
Application of statistical design using 3-factor, 2-level full factorial
After preliminary experimental trials it was found that the concentrations of chitosan, TPP, and Poloxamer have a visibly significant influence on drug release and encapsulation efficiency. Therefore these three parameters were taken as independent variables. A 3-factor, 2-level full factorial design was applied to optimise the nanoparticles and to determine the effect of the independent variables on the dependent variables, percentage encapsulation efficiency and particle size. The independent variables and their levels are represented in Table 1; a minimal sketch of how such a coded design matrix can be enumerated follows.
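The sketch below enumerates the eight coded runs of a 3-factor, 2-level full factorial design. The physical values attached to the -1/+1 levels would come from Table 1 of the original and are stand-in assumptions here.

```python
from itertools import product

# Coded -1/+1 levels for the three independent variables; the physical
# values attached to each level are illustrative placeholders.
factors = {
    "chitosan_pct":     {-1: 0.1,  +1: 0.3},
    "tpp_pct":          {-1: 0.05, +1: 0.15},
    "poloxamer188_pct": {-1: 0.5,  +1: 1.0},
}

runs = []
for levels in product((-1, +1), repeat=len(factors)):
    coded = dict(zip(factors, levels))
    actual = {name: factors[name][lvl] for name, lvl in coded.items()}
    runs.append((coded, actual))

for i, (coded, actual) in enumerate(runs, start=1):
    print(f"run RL{i}: coded={coded} actual={actual}")
```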
Development of Rosuvastatin-Lovastatin nanoparticles (RST-LST-NP)
The method of choice to formulate RST-LST-NP was the ionic gelation method using chitosan (CS) as coating material. Coacervates in the nano size range were formed by interaction of the positively charged amino groups in CS with negatively charged tripolyphosphate (TPP). Pre-weighed quantities of RST (5 mg) and LST (10 mg) were dissolved separately in solution; the formulation compositions are given in Table 2. Using a similar method, nanoparticles of RST and LST were prepared individually for comparison in the animal study.
Particle size, zeta potential, polydispersity index and yield of nanoparticles
A suspension of RST-LST-NP was prepared by dispersing about 5 g of RST-LST-NP into 15 ml of distilled water and sonicating the solution for 3-5 min. A 10-fold dilution was performed using double-distilled water. A drop of this suspension was mounted on foil paper, dried, and examined under scanning electron microscopy (SEM) at an operating distance of 8-8.5 mm. A Zetasizer (Malvern Instruments, DTS Ver. 4.10) was used to measure particle size, zeta potential, and polydispersity index (PDI) [18,19].
RST-LST entrapment by CS nanoparticles (entrapment efficiency)
A pre-weighed quantity of RST-LST-loaded CS nanoparticulate suspension was centrifuged at 10,000 rpm for 25 min at room temperature until a clear supernatant was obtained. This clear liquid was separated by decanting the solvent. The concentration of non-entrapped (free) drug, for RST and LST separately, was determined from this supernatant by spectrophotometric analysis [20]. The RST-LST entrapped in the CS nanoparticles was calculated using the following equation: Encapsulation efficiency (%) = [(total drug - free drug in supernatant) / total drug] × 100.
In vitro drug release of RST-LST from nanoparticles
The release pattern of both drugs was studied individually using the same experimental technique. A dialysis bag with a molecular weight cut-off of 5 kDa was used as a membrane replica to observe the release of RST-LST from RST-LST-NPs in pH 7.4 phosphate buffer solution (PBS). About 15 mg of RST-LST-NPs was weighed, redispersed, and placed into 150 mL of PBS in a beaker maintained at 37 °C, and the whole system was assembled on a magnetic stirrer. A 5 mL sample aliquot was removed from the buffer solution and replaced with fresh PBS to maintain sink conditions. The amount of RST in the sample was analyzed using UV spectrophotometry at 308 nm; the same procedure was repeated for estimation of LST at a maximum wavelength of 248 nm [21].
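A minimal sketch of the two calculations just described follows: the encapsulation-efficiency formula as reconstructed above, and the cumulative-release bookkeeping that corrects for the 5 mL aliquots replaced with fresh PBS. Function names and the example values are illustrative assumptions.

```python
def encapsulation_efficiency(total_drug_mg, free_drug_mg):
    """Percent drug entrapped: (total - free in supernatant) / total * 100."""
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

def cumulative_release(sampled_conc_mg_per_ml, v_total_ml=150.0, v_sample_ml=5.0):
    """Cumulative drug released (mg) at each sampling time, adding back the
    drug carried out with each withdrawn aliquot (sink-condition sampling)."""
    released, removed = [], 0.0
    for c in sampled_conc_mg_per_ml:
        released.append(c * v_total_ml + removed)
        removed += c * v_sample_ml
    return released

print(encapsulation_efficiency(total_drug_mg=15.0, free_drug_mg=6.4))  # ~57.3 %
print(cumulative_release([0.01, 0.02, 0.03]))  # mg released at each time point
```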
Development of atherosclerotic animal model
New Zealand rabbits were procured from People's Hospital of Chongqing Shuangqiao Economic and Technological Development Zone, Chongqing, 400900, China. All experiments were performed according to the guidelines of the Principles of Laboratory Animal Care and the Guide for the Care and Use of Laboratory Animals. The animals (3-4.5 kg) were chosen randomly and divided into groups as follows: the first group consisted of 30 animals fed a regular diet (RD; n = 30); the second group consisted of animals kept on a fat-rich diet (FRD; n = 40). Animals in the FRD group were additionally supplied with intra-gastric (i.g.) vitamin D3 (60 IU/kg) for up to 3 days, followed by a dose of 5 mL/kg of a high-fat emulsion containing pyrimidine twice daily by intra-gastric administration. This emulsion contained 200 g pork, 200 g cholesterol, 20 g bile salts, and 10 g propylthiouracil, dissolved in 1 L distilled water. Simultaneously, control animals were kept on an equal volume of normal saline and fed a normal diet. The animals were housed individually under suitable environmental conditions at a temperature of 20-22 °C with relative humidity of 45-55% and a 12 h light-dark cycle. After induction of atherosclerosis the animals were subdivided as follows. Ten experimental animals were chosen from group RD and kept as negative control (RD-N; n = 10). The animals in the FRD group (animals with induced atherosclerosis) were subdivided into four groups of ten animals each: atherosclerotic animals receiving treatment with RST-LST-NPs were put in group T-RLN (n = 10), another group was treated with RST-NPs only (T-RST; n = 10), the animals in the third group were treated with LST-NPs only (T-LST; n = 10), and the last group of animals received no treatment (NT; n = 10).
Determination of serum uric acid level
Animals were fasted for at least 8 h and injected with 10% chloral hydrate to induce anaesthesia. Blood samples were collected, allowed to stand for 4 h at room temperature, and centrifuged at 3500 rpm for 15 min. Serum was separated (Sample 1) and uric acid levels were determined.
Determination of serum lipid level
The serum lipid levels (total cholesterol, LDL, HDL) were determined from Sample 1 using an auto-analyser (Toshiba, Japan).
Histological evaluation
After completion of the experiment the aorta was harvested, cut open in half, and fixed in buffered formalin solution. Rings of aorta were cut at 4 μm and stained with haematoxylin and eosin (HE).
Formulation and development of Rosuvastatin-Lovastatin nanoparticles (RST-LST-NP)
The nanoparticles of RST-LST-NP, RST-NP, and LST-NP were prepared by the ionic gelation method employing chitosan as coating material. Particles in the nano size range were formed by interaction of the positively charged amino groups in CS with negatively charged tripolyphosphate (TPP). CS is a natural polymer, and the nanoparticles prepared from it are highly acceptable due to their enhanced biocompatibility, non-immunogenicity, and non-toxicity. The serum availability of drug delivered via nanoparticles is also increased by the use of CS as the polymeric material. The surface characteristics of the chitosan-based RST-LST-NP were examined using scanning electron microscopy (SEM), as shown in Figure 1.
Spherical nanoparticles with smooth surfaces showing no cracks or crevices were observed. After drying, the free-flowing nature of RST-LST-NP could be observed. Particle size is a crucial criterion for drug delivery: the smaller the particle, the greater the cellular uptake and the stronger the therapeutic action. Zeta potential is an important parameter for the kinetic stability of nanoparticles; potentials between −25 and +32 mV are generally associated with repulsive forces sufficient for good physical stability of a dispersion or suspension. The zeta potential of RST-LST-NP fell within this range and was hence considered suitable for administration. The size range of 97-210 nm was found to be well suited for intra-articular administration to treat atherosclerosis, and the average nanoparticle size is one of the prime considerations making RST-LST-NP effective in passive drug targeting. The PDI of RST-LST-NP was reliable and within the acceptable range (less than 0.3), clearly indicating homogeneity of the nanoparticulate dispersion. A large PDI value might result from longer chitosan chains, which prevent aggregation of nanoparticles. Overall, the results showed good stability with respect to PDI, zeta potential, and particle size. Table 3 summarizes all physical parameters of RST-LST-NP. The percent encapsulation efficiency of RST-LST-NPs was calculated from the free (non-entrapped) drug in the supernatant solution. The percentage encapsulation efficiency of RST ranged from 43.67% (RL 7) to 58.67% (RL 4), while that of LST ranged from 41.98% (RL 7) to 57.34% (RL 4). A direct proportionality was found with the concentrations of the polymers, that is, CS and Poloxamer 188: from the results one can predict that the higher the polymer concentration, the greater the encapsulation of drug. The EE can also vary with changes in the concentrations of the independent variables. The EE of both drugs showed good results at the +1 level of both polymers (CS and Poloxamer) and of TPP. The highest encapsulation efficiencies were 57.34% for LST and 58.67% for RST. Polynomial equations for % EE (Y1) were obtained for both drugs after application of the statistical design; for LST the equation is: Y1 = +45.10 + 20.56A + 7.56B + 9.91C (2), where Y1 is the % EE, A denotes the concentration of chitosan (%), B the TPP concentration (%), and C the concentration of Poloxamer 188 (%). The model F value of 489.12 for these data indicates the significance of the model. A positive coefficient indicates a synergistic effect, while a negative value represents an antagonistic effect. The suggested model for Y1 is linear, as shown in Table 4; the correlation coefficient (R²) for Y1 was 0.8612, indicating a good fit, and the p value was < 0.001, indicating statistical significance of the model.
Among the three independent variables, all factors were significant: A (p < 0.0001), B (p = 0.0022), and C (p = 0.0001), as shown in Table 5. A higher concentration of chitosan imparts greater ionic gelation, resulting in enhanced entrapment efficiency. Low EE was attributed to the smaller amounts of CS and TPP used during nanoparticle formulation. As a general principle, increased EE may be due to a higher concentration of polymer with respect to the amount of drug used during nanoparticle formulation: a higher polymer concentration forms a denser drug-polymer dispersion, which helps entrap the drug molecules in the nanoparticles. Nanoparticles prepared with more cross-linking agent showed an increase in EE. An analogous equation was fitted for drug release (Y2), where A denotes the concentration of chitosan (%), B the TPP concentration (%), and C the concentration of Poloxamer 188 (%). A sketch of how such factorial regression coefficients are obtained follows.
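Coefficients such as those in equation (2) follow from an ordinary least-squares fit of the responses over the coded design matrix. The sketch below shows this for eight hypothetical % EE values; the response vector is a placeholder, not the study's measured data.

```python
import numpy as np

# Coded 2^3 full factorial design matrix: intercept, A (chitosan),
# B (TPP), C (Poloxamer 188), each at -1/+1.
levels = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])
X = np.column_stack([np.ones(len(levels)), levels])

# Placeholder % EE responses for runs RL1..RL8 (illustrative only).
y = np.array([42.0, 48.1, 44.9, 51.2, 45.6, 53.0, 47.8, 57.3])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["b0", "A", "B", "C"], beta.round(2))))
# Positive coefficients indicate synergistic effects, negative ones
# antagonistic effects, as discussed in the text.
```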
Determination of change in uric acid (UA) level after administration of RST-LST-NP's
Serum uric acid (UA) levels were analysed using an auto-analyser. In the RD (regular diet) group the uric acid level was normal, at 160 μmol/L, while the untreated experimental group (NT) showed the peak UA level (210 μmol/L). The experimental groups treated with Rosuvastatin nanoparticles only (T-RST) and Lovastatin nanoparticles only (T-LST) showed slightly lower UA levels (180 and 175 μmol/L, respectively) than the NT group. Significant lowering of UA levels was observed in the treatment group that received RST-LST-NPs, at 165 μmol/L. The values, presented graphically in Figure 3, show that RST-LST-NPs were significantly more potent in lowering serum uric acid levels [29].
Determination of change in lipid profile after administration of RST-LST-NP's
In elaborating the lipid profile from the serum sample, the concentrations of total cholesterol (TC), low-density lipoprotein (LDL), and high-density lipoprotein (HDL) were determined. The RD group showed no significant rise in any lipid level; concentrations were in the normal range (TC = 1.6 μmol/L, LDL = 0.37 μmol/L, and HDL = 1.46 μmol/L). A very sharp and significant change in lipid profile was observed in experimental group NT, with all lipid levels remarkably increased: TC was 3.2 μmol/L, LDL was 0.99 μmol/L, and HDL was 2.1 μmol/L. These values are high enough to cause mortality due to atherosclerosis. In the treatment group receiving RST-LST-NPs, the lipid levels were TC 1.72 μmol/L, HDL 1.51 μmol/L, and LDL 0.43 μmol/L, values very close to the normal lipid profile of the animals. The values of TC, HDL, and LDL in the group treated with RST nanoparticles were 1.97, 1.8, and 0.59 μmol/L, respectively, while for the LST nanoparticle treatment group the values were 2.1, 1.75, and 0.63 μmol/L, respectively. Although both individual nanoparticle formulations (RST-NP and LST-NP) were effective in slightly lowering the lipid profile, they were not as efficient as the combination nanoparticles (RST-LST-NPs). The results therefore demonstrate the efficiency of RST-LST-NPs in lowering the lipid content of blood serum.
Histological Evaluation
Histological evaluation showed no significant changes in the aortas of the RD group. A large number of foam cells, with cell infiltration and nuclear condensation, was observed in the NT group (see Figure 4(A)). However, no nuclear condensation, calcium deposits, or cholesterol/fat deposits were observed in the T-RLN treatment group (Figure 4(B)); a small number of foam cells can be observed, but not to an extent significant enough to generate atherosclerosis.
CONCLUSION
In the present experiment, an animal model to evaluate the effect of RST-LST-NPs on atherosclerosis was successfully established. The formulated RST-LST-NPs were found to be very effective in lowering uric acid levels and maintaining the levels of different lipoproteins (TC, HDL, and LDL) when compared to the non-treatment group. The synergistic action of RST and LST was confirmed by combining them in RST-LST-NPs and comparing their effect with that of RST-NP and LST-NP individually. The results obtained support the efficiency of RST-LST-NPs in lowering and maintaining lipoprotein levels in the atherosclerotic model.
"Biology"
] |
Voltammetric Determination of Ferulic Acid Using Polypyrrole-Multiwalled Carbon Nanotubes Modified Electrode with Sample Application
A polypyrrole-multiwalled carbon nanotubes modified glassy carbon electrode-based sensor was devised for determination of ferulic acid (FA). The fabricated sensor was prepared electrochemically using cyclic voltammetry (CV) and characterized using CV and scanning electron microscope (SEM). The electrode shows an excellent electrochemical catalytic activity towards FA oxidation. Under optimal conditions, the anodic peak current correlates linearly to the FA concentration throughout the range of 3.32 × 10−6 to 2.59 × 10−5 M with a detection limit of 1.17 × 10−6 M (S/N = 3). The prepared sensor is highly selective towards ferulic acid without the interference of ascorbic acid. The sensor applicability was tested for total content determination of FA in a commercial popcorn sample and showed a robust functionality.
Introduction
Ferulic acid (FA) is a superior antioxidant among many ubiquitous polyphenolic acids in the plant kingdom. Ferulic acid plays a major role in protecting cell constituents; therefore, its longer circulation in the blood stream is of great interest. Moreover, FA is actively used as an anti-inflammatory,
anti-aging, and antithrombotic drug [1]. Additionally, it alleviates oxidative stress as well as decreases glucose levels in diabetic rats [2]. For all aforementioned facts, it is of significant importance to develop a simple, effective, fast, and low-cost method for the determination of ferulic acid.
Several analytical methodologies are available for the determination of ferulic acid concentration, e.g., high-performance liquid chromatography (HPLC) [3] and spectrophotometry [4]. Such instrumental methods depend on multistep sample cleanup procedures, which are time-consuming, expensive, and complex to set up. The electroanalytical approach has therefore become an alternative method for the analysis of ferulic acid for many reasons, such as high selectivity, quick response, low cost, possible miniaturization, and ease of operation. Recently, several types of sensors were developed for the determination of FA, such as the L-cysteine self-assembled monolayer (SAM) modified gold electrode [5], carbon paste electrodes [6,7], the multiwalled carbon nanotube-modified glassy carbon electrode (MWCNTs/GCE) [8], multiwalled carbon nanotubes decorated with MnO2 nanoparticles on GCE [9], and reduced graphene oxide/GCE [10]. Moreover, rapid electrochemical detection of ferulic acid using a graphene sensor was also introduced [11]. For the direct quantitative determination of FA, a more sensitive sensor based on a poly(diallyldimethylammonium chloride) functionalized graphene-modified GCE was introduced [1]. Nevertheless, it remains a fruitful challenge to build an electrochemical sensor based on novel nanomaterials for effortless, sensitive, and fast detection of ferulic acid.
Composite materials based on the conjugation of conducting polymers and CNTs have been shown to possess the properties of their individual components with a synergistic effect. Numerous efforts have been committed to designing and preparing new polymer-CNT composites that exhibit new features in specific applications. Peng et al. studied the fabrication of composite films of CNTs with polyaniline (PANI), polypyrrole (PPy), or poly(3,4-ethylenedioxythiophene) (PEDOT) via electrochemical co-deposition, using a solution of acid-treated CNTs and the corresponding monomer. The electrochemically synthesized composite films have in common a porous structure at the micro- and nanometer scales. In addition, they have better mechanical integrity, higher conductivity, and greater stability compared to pure conducting polymers [12]. It has been shown that a polypyrrole-carbon nanotube (PPy-CNTs) composite exhibits very different electronic properties compared to PPy and CNTs alone. Furthermore, composite films of CNTs with polypyrrole are well suited for gas sensor applications [13].
Throughout our experiment, pyrrole was selected as the starting monomer due to its low cost and ease of electrochemical polymerization. The synthesized polypyrrole (PPy) is of substantial interest for its promising applications in sensing; our choice of PPy is based on its good electronic and mechanical properties as well as its high structural stability [14][15][16][17]. This research is a continuation of our previous work on the electrochemical study of antioxidants [18][19][20][21][22][23]. The aim is to establish a PPy-CNTs/GCE-based electrochemical sensor for the determination of FA. The composite electrode under study was prepared electrochemically and characterized using cyclic voltammetry (CV) and scanning electron microscopy (SEM). Adsorptive square wave voltammetry is applied to validate the applicability of the sensor for the determination of total phenolic content and concentrations of FA in commercial popcorn.
Characterization of the Modified Electrode (PPy-MWCNTs/GCE)
Electrochemical properties of the bare GC and PPy-MWCNTs/GC electrodes were studied in 1.0 mM K3Fe(CN)6 solution, in 0.2 M KCl, using cyclic voltammetry at a scan rate of 20 mV/s. Cyclic voltammograms were recorded and the results are depicted in Figure 2. K3Fe(CN)6 shows a pair of well-defined redox peaks with an enhanced peak current (5.29 μA) at PPy-MWCNTs/GCE compared to the bare GCE (2.91 μA), a 1.82-fold enhancement. This indicates that the PPy-MWCNTs film increases the active surface area of the electrode. Moreover, the peak current enhancement is accompanied by a positive shift of the peak potential, with a peak potential separation, ΔEp, of 10 mV. These results confirm the notable electrocatalytic activity of the PPy-MWCNTs composite film. Our modification increases the surface roughness and the effective surface area compared to the bare GCE. The roughness of the PPy-MWCNTs/GC electrode surface is calculated using the cyclic voltammetric Randles-Sevcik equation (1):
ip = 2.69 × 10^5 n^(3/2) A D^(1/2) C v^(1/2) (1)
where ip is the peak current (A), n is the number of electrons, A is the active surface area (cm²), D is the diffusion coefficient (cm²/s), C is the concentration (mol/cm³), and v is the scan rate (V/s). For determining the ratio of the apparent active surface area of the PPy-MWCNTs/GCE to that of the bare GCE, the data of the K4Fe(CN)6 cyclic voltammograms (Figure 2) are used. The apparent active surface area ratio is found to be 1.31, which is similar to the results of Babaei et al. [24]. Electrochemical properties of the bare GC and PPy-MWCNTs/GC electrodes were also studied using ferulic acid. The cyclic voltammetric behavior of 3.22 × 10⁻⁵ M FA at the bare GC and MWCNTs/GC electrodes was recorded in 0.2 M H3PO4 (pH 2.54) at a scan rate of 20 mV s⁻¹ (Figure 3). Broad oxidation and reduction waves at peak potentials Epa of 850 mV and Epc of 300 mV, respectively, are observed at the bare GCE, while at MWCNTs/GCE and PPy-MWCNTs/GCE sharp waves are observed at 754 mV. The peak potential separation at the bare GCE (ΔEp = 550 mV) is higher than that at the modified electrode (ΔEp = 254 mV). This reveals that catalytic oxidation of the phenol groups of ferulic acid takes place on the modified electrodes at a less positive potential than on the bare electrode. As a result, a decrease of the over-potential by 96 mV and an improvement of the electrochemical reversibility are obtained.
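The sketch below evaluates equation (1) for the active surface area. The peak currents are the ferricyanide values quoted above, while the diffusion coefficient is a typical literature value and therefore an assumption, not a number taken from this paper. Note that under identical probe conditions the area ratio reduces to the peak-current ratio; the 1.31 value quoted above comes from the separate K4Fe(CN)6 data set.

```python
def randles_sevcik_area(ip_A, n, D_cm2_s, C_mol_cm3, v_V_s):
    """Active electrode area A (cm^2) from the Randles-Sevcik equation at 25 C:
    ip = 2.69e5 * n**1.5 * A * sqrt(D) * C * sqrt(v)."""
    return ip_A / (2.69e5 * n**1.5 * D_cm2_s**0.5 * C_mol_cm3 * v_V_s**0.5)

D = 7.6e-6   # cm^2/s, assumed diffusion coefficient of ferricyanide
C = 1.0e-6   # mol/cm^3 (1.0 mM)
v = 0.020    # V/s (20 mV/s)

a_bare = randles_sevcik_area(2.91e-6, 1, D, C, v)  # bare GCE peak current
a_mod = randles_sevcik_area(5.29e-6, 1, D, C, v)   # PPy-MWCNTs/GCE peak current
print(f"bare: {a_bare:.3f} cm^2, modified: {a_mod:.3f} cm^2, "
      f"ratio: {a_mod / a_bare:.2f}")
```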
Square Wave Voltammetry of Ferulic Acid and Optimal Conditions of Accumulation
Voltammetric determination of ferulic acid was carried out using the square wave adsorptive stripping method (SWAdsSV) on the modified glassy carbon electrode, PPy-MWCNTs/GCE. SWAdsSV is an effective and rapid electroanalytical technique with advantages including good discrimination against background currents and low detection limits. For the electrochemical determination of FA, the optimal conditions of accumulation potential, accumulation time, pH, and square wave frequency were investigated as follows.
Effect of Accumulation Potential
The effect of accumulation potential on the oxidation peak current of FA was examined at deposition potentials in the range of −0.60 to 0.20 V at a frequency of 10 Hz, a deposition time of 60 s, and pH 2.54. The oxidation peak current increases as the accumulation potential is varied from −0.60 V to a maximum at −0.40 V; on further increase of the electrode potential to 0.20 V, the peak current decreases. Therefore, an accumulation potential of −0.40 V was selected as the optimal value for subsequent experiments (Figure 4a).
Effect of Accumulation Time
To evaluate the effect of accumulation time on the determination of FA, SWVs at PPy-MWCNTs/GCE were performed at different accumulation times. SWVs of FA were recorded at times from 30 s to 180 s at a frequency of 10 Hz, a deposition potential of −0.4 V, and pH 2.54. The oxidation peak current increases with time from 30 s to 60 s, then decreases on any further increase of time. This suggests that the amount of FA adsorbed on the modified electrode surface reaches a maximum at 60 s (Figure 4b). Therefore, an accumulation time of 60 s was selected for further studies.
Effect of pH
To study the effect of pH on the determination of FA, SWVs at PPy-MWCNTs/GCE were carried out at different pH values at a frequency of 10 Hz, a deposition potential of −0.40 V, and a deposition time of 60 s. The behavior was examined over a pH range of 2.54 to 5.51. At pH 2.54, the peak current, ip, amounts to 4.28 µA. On increasing the pH of the solution to 3.51, ip increases slightly, and on further increase it decreases sharply (Figure 4c). Thus, pH 3.5 was chosen in our work.
Effect of Square Wave Frequency
In order to decrease the determination limit of FA using the proposed method, SWVs were performed at a deposition potential of −0.40 V, pH of 3.5, and deposition time of 60 s, with the frequency varied from 5 Hz to 40 Hz (Figure 4d). A linear relationship was obtained between frequency and peak current, and a frequency of 30 Hz was chosen as the measurement frequency.
Calibration Curve and Detection Limit
Square wave voltammograms obtained at PPy-MWCNTs/GCE in 0.2 M H3PO4 solution under optimal conditions (pH 3.5, accumulation potential −0.40 V, accumulation time 60 s, and frequency 30 Hz) on successive additions of ferulic acid were recorded. Figure 5 shows typical SWVs for FA at different concentrations. The oxidation peak current, ip, of ferulic acid increases linearly with concentration from 3.32 × 10⁻⁶ to 2.59 × 10⁻⁵ M. This is represented by the following linear regression equation (2):
ip (A) = 6.79 × 10⁻⁶ + 0.255 CFA (µM), r = 0.996 (2)
From the analytical data, the lower detection limit for FA is determined to be 1.17 × 10⁻⁶ M (S/N = 3). This is ascribed not only to the large surface area of the PPy-MWCNTs/GCE, but also to the adsorption of FA and the electrocatalytic effect of the modified electrode.
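A minimal sketch of the calibration and the S/N = 3 detection-limit estimate follows. The synthetic calibration points are assumptions that merely span the reported linear range, and the residual standard deviation stands in for the blank noise.

```python
import numpy as np

# Synthetic calibration points spanning the reported range (uM vs uA).
c = np.array([3.32, 6.6, 9.9, 13.2, 19.6, 25.9])
i = 0.5 + 0.255 * c + np.random.default_rng(0).normal(0, 0.05, c.size)

slope, intercept = np.polyfit(c, i, 1)
resid = i - (slope * c + intercept)
s_noise = resid.std(ddof=2)     # noise estimate from regression residuals

lod = 3 * s_noise / slope       # S/N = 3 criterion
print(f"slope = {slope:.3f} uA/uM, LOD = {lod:.2f} uM")
```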
Interferences
The influence of interference was evaluated by adding ascorbic acid into the cell containing 7.96 × 10⁻⁶ M FA. Experimental results showed that a 100-fold concentration of ascorbic acid did not interfere with the determination of ferulic acid (signal change below 5%). The results indicate that the constructed PPy-MWCNTs/GCE sensor is adequate for the determination of ferulic acid.
Analytical Application
In order to evaluate the validity of the PPy-MWCNTs/GCE-based sensor for the determination of ferulic acid in a real sample, the total content of FA in commercial popcorn was determined with the proposed method using the adsorptive square wave voltammetric technique. The total FA content of the commercial popcorn sample was determined by the standard addition method with a standard solution of FA, under the procedure described earlier at the optimal parameters. Successive amounts of standard FA solution were added to the test solution and the voltammograms were recorded. Typical results are depicted in Figure 6, where the solid SWV represents the blank signal and the real sample. The total ferulic acid content of the dried commercial popcorn sample, expressed as milligrams of ferulic acid equivalents (FAE) per gram, is estimated to be 3.26 mg/g.
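The standard-addition extrapolation used for the popcorn sample can be sketched as below; the added-concentration/signal pairs are placeholders, not values digitized from Figure 6, and conversion from the in-cell concentration to mg FAE per gram still requires the aliquot and dilution factors from the sample-treatment section.

```python
import numpy as np

# Standard additions: FA added to the cell (uM) vs peak current (uA).
added = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
signal = np.array([1.10, 1.62, 2.13, 2.66, 3.17])

slope, intercept = np.polyfit(added, signal, 1)
c_cell = intercept / slope   # extrapolation of the fitted line to zero signal
print(f"FA concentration in the measured cell: {c_cell:.2f} uM")
```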
Treatment of MWCNTs
MWCNTs were sonicated with a mixture of H2SO4 and HNO3 (3/1, volume ratio) for 6 h to remove impurities, reduce bundle sizes, and generate functional groups on their surface. The treated MWCNTs were then washed several times with bi-distilled water until the washings were neutral, and dried at about 70 °C. A mixture of 5 mg treated MWCNTs, 30 mg sodium dodecyl sulfate, 1 mL N,N-dimethylformamide, and 1 mL ethanol was ultrasonically mixed for 4 h to form a stable black suspension.
Preparation of PPy-MWCNTs/GCE
The polypyrrole-multiwalled carbon nanotubes/glassy carbon electrode (PPy-MWCNTs/GCE) was prepared and used as the working electrode. Platinum wire was used as the counter electrode, and a silver/silver chloride (KCl, 1 M) electrode was employed as the reference electrode. The polypyrrole film doped with CNTs on the glassy carbon electrode was fabricated by adding a portion of the treated MWCNTs suspension (50 µL) to a 1.0 × 10⁻⁴ M distilled pyrrole solution in 0.2 M H3PO4 (pH 2.54) under ultrasonic stirring. The reaction mixture was electrolyzed using two different techniques for comparison. First, an anodic potential of 0.95 V versus Ag/AgCl was applied for 30 s as described by Peng et al. [12]. Second, cyclic voltammetric polymerization in a potential range of 0.1 to 1.7 V for five cycles at a scan rate of 20 mV s⁻¹ was employed, as illustrated in Figure 7. The details of electropolymerization by cyclic voltammetry have been described elsewhere [25][26][27]. The solutions were degassed by bubbling dry N2 gas for 10 min prior to polymerization, and a N2 atmosphere was maintained during the experiments. The prepared PPy-MWCNTs/GC electrode was washed repeatedly with bi-distilled water and methanol to remove the electrolyte and the monomer. The prepared PPy-MWCNTs/GCE was then transferred into blank 0.2 M H3PO4 solution and subjected to five voltammetric cleaning cycles at room temperature. For comparison, the electrochemical activity of FA was tested on the two electrodes fabricated by potentiostatic and potentiodynamic polymerization; the voltammograms of FA were identical on both. The potentiodynamic technique was therefore utilized for the electropolymerization of the PPy-MWCNTs-modified electrode.
Instruments
The electrochemical experiments were performed using an Autolab PGSTAT128N Potentiostat/Galvanostat (Eco Chemie BV, Utrecht, The Netherlands). The electrochemical analyzer was operated via NOVA 1.10 software (Metrohm Autolab B.V., Utrecht, The Netherlands); NOVA is a software package designed to control Autolab instruments with a USB interface. Electrochemical measurements were performed using the bare or modified glassy carbon electrode as the working electrode. The glassy carbon electrode was freshly polished to a smooth surface using fine-grade SiC paper (0.5 µm) and washed with bi-distilled water. After sonication in water for 2 min, the electrode was modified. Between measurements, the electrode surface was polished and sonicated to obtain a clean surface and to avert any possible problems from adsorption of FA oxidation products onto the electrode surface; thus, the working electrode was renewed before each measurement. Surface morphologies of the polypyrrole and polypyrrole-MWCNTs composite films were examined using a scanning electron microscope, JEOL (JSM T200, Tokyo, Japan), with an electron beam energy of 15 kV. For this purpose, a thin layer of gold (50 Å) was deposited by physical vapour deposition. pH measurements were performed using a HI 2210 benchtop pH meter (HANNA Instruments, Bucharest, Romania) with a combined pH reference electrode.
Sample Treatment
An accurately weighed amount of dried popcorn powder (4.0 g) was transferred into a 50 mL conical flask, ethanol was added, and the mixture was sonicated for 2 h at 50 °C. The mixture was cooled to room temperature and filtered through Whatman paper (No. 1). The filtrate was diluted to 8 mL with ethanol. Then a 50.0 μL aliquot of the sample solution was added directly to 15.0 mL of H3PO4 in the electrochemical cell and the square wave voltammogram was recorded.
Conclusions
A glassy carbon electrode modified with polypyrrole-multiwalled carbon nanotubes (PPy-MWCNTs/GCE) was prepared and used as a working sensor for the detection of ferulic acid. Cyclic and square wave voltammetric techniques were used to investigate the electrochemical behavior of ferulic acid. It was shown that PPy-MWCNTs/GCE exhibits remarkable electrocatalytic effects towards the oxidation of FA, enhancing its oxidation peak current. The method can be applied to the adsorptive stripping voltammetric determination of FA concentrations in a commercial sample of popcorn.
"Materials Science"
] |
Simulating the Volcanic Sulfate Aerosols From the 1991 Eruption of Cerro Hudson and Their Impact on the 1991 Ozone Hole
The Chilean volcano Cerro Hudson erupted between August 8th and 15th, 1991, injecting between 1.7 and 2.9 Tg of SO2 into the upper troposphere and lower stratosphere. We simulate this injection using the Goddard Earth Observing System Earth system model with detailed sulfur chemistry and sectional aerosol microphysics, focusing on the resulting aerosols and their contribution to the 1991 Antarctic Austral Springtime ozone hole. The simulations show a column ozone deficit (12 DU) in the Southern Hemisphere vortex collar region. The majority of this effect is between 10 and 20 km and due to heterogeneous chemistry. The model shows a 26% decrease in ozone from background levels at these altitudes, compared with in‐situ observations of a 50% decrease. Above 20 km, the dynamical response to the eruption also causes lower ozone values, a novel modeling result. This experiment highlights potential interactions between proposed solar radiation management geoengineering aerosols and volcanic eruptions.
Introduction
While the August 8th to 15th Cerro Hudson (45°S, 72°W) volcanic eruption produced the fifth largest sulfur dioxide (SO2) emissions ever observed by satellites, it was overshadowed by the June 15th Mount Pinatubo (15°N, 120°E) eruption (Carn et al., 2016). The Cerro Hudson injection is estimated to have put 1.7-2.9 Tg of SO2 and a similar amount of ash between 16 and 18 km (Bluth et al., 1992; Constantine et al., 2000; Miles et al., 2017). The ash and SO2 quickly separated, with about 90% of the ash falling out in the first few days following the eruption, settling across South America (Constantine et al., 2000). The SO2 was observed by the Total Ozone Mapping Spectrometer (TOMS) (Bluth et al., 1992) and High Resolution Infra-Red Radiation Sounder/2 (HIRS/2) (Miles et al., 2017) satellite instruments and remained in the lower stratosphere between 50°S and 70°S as it circled the Earth (Figures 1b and 1c; Doiron et al., 1991; Schoeberl et al., 1993). The satellite-borne Microwave Limb Sounder, also capable of retrieving SO2, came online in September 1991, and while it was able to make useful observations of the tropical Pinatubo plume, it produced noisy results at the altitude and latitude of the Cerro Hudson plume (Miles et al., 2017).
On September 10th and daily after September 20th, the Cerro Hudson plume was observed above McMurdo Bay by both lidar and a balloon-borne optical particle counter (Deshler et al., 1992; Hofmann & Oltmans, 1993). The low altitude of the volcanic aerosol layer, between 9 and 13 km, combined with the presence of freshly nucleated aerosols, indicated that this aerosol was from Cerro Hudson; the Pinatubo plume, on the other hand, was detected only above 17 km. Deshler et al. (1992) and Hofmann and Oltmans (1993) found coincident low ozone measurements at 12 km, where ozone concentrations are normally not impacted by seasonal depletion. They reported an ozone loss rate of 4-8 ppb day⁻¹ over 30 days following September 24th, totaling ~50% ozone depletion compared to years with comparable PSC-induced ozone loss. Despite these in-situ observations of aerosols and anomalous ozone values, satellite observations did not show the Cerro Hudson SO2 entering the vortex before the beginning of September (Krueger et al., 1992). Trajectory model results also suggested that the volcanic plume remained outside of the vortex during the period of major ozone depletion (Krueger et al., 1992; Schoeberl et al., 1993).
Studies of more recent volcanic eruptions occurring in the Southern midlatitudes show that even moderately sized eruptions at these latitudes can impact the springtime ozone loss (Solomon et al., 2016; Zhu et al., 2018). In the case of the April 2015 Calbuco eruptions (Zhu et al., 2018), observations and models show that transport of volcanic aerosols into the vortex occurred as early as May, allowing them to alter polar stratospheric clouds and the chemistry of springtime ozone depletion. Despite a later eruption date, Deshler et al. (1992) showed that Cerro Hudson aerosol appeared at high Southern latitudes and potentially impacted ozone in September and October.
We simulated the 1991 Cerro Hudson eruption using the Goddard Earth Observing System (GEOS) model coupled with a sectional aerosol microphysics module, the Community Aerosol and Radiation Model for Atmospheres (CARMA), and the tropospheric and stratospheric chemistry module GEOS-Chem (GC). Here we show that (a) the GEOS model reasonably reproduces the satellite and balloon-borne in-situ observations of the Cerro Hudson plume, (b) Cerro Hudson aerosol reached high Southern latitudes and impacted ozone below 20 km while remaining outside the vortex above 20 km, and (c) the dynamical response to these aerosols resulted in a more persistent vortex and lower ozone values above 20 km.
Materials and Methods
Goddard Earth Observing System (GEOS) is an Earth system model based on the architecture of the Earth System Modeling Framework (Hill et al., 2004; Molod et al., 2015). In this study, we use the atmospheric general circulation model (AGCM) configuration in its "free-running" mode, in which the model calculates its own meteorology without any data assimilation and with imposed sea surface temperatures based on observations. The GEOS system has been shown to perform well in stratospheric chemistry and transport processes (Douglass et al., 2012; SPARC CCMVal, 2010; Strahan et al., 2011). We run GEOS at a ~100 km horizontal resolution on a cubed-sphere grid with 72 hybrid-sigma vertical levels extending from the surface to ~80 km. While the GEOS AGCM can be coupled to various aerosol modules, here we are using the sectional aerosol microphysics from CARMA (Bardeen et al., 2008; Colarco et al., 2014; Toon et al., 1988). We have coupled CARMA to the GEOS-Chem tropospheric and stratospheric chemistry mechanism (Bey et al., 2001); GEOS-Chem calculates the production of H2SO4 gas, which CARMA then uses to calculate the aerosol microphysics across a range of size bins. Cerro Hudson is initialized in the model with an injection of 2.7 Tg of SO2 between 16 and 18 km in the grid column above the volcano, spread out over 24 hr on the day of the largest eruption (August 15). We use Pinatubo injection parameters similar to those in Mills et al. (2017): we inject 10 Tg SO2 over 25 hr on 15 June 1991, uniformly mixed from 18 to 21 km altitude between 0° and 15°N over a 1-degree-wide longitude region centered at 120°E. This configuration is identical to the simulations presented and validated in Case et al. (2023).
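A minimal sketch of the injection setup follows, distributing the 2.7 Tg SO2 emission uniformly in time over 24 hr and in altitude over the 16-18 km layer of a single grid column. The model-level edges are illustrative placeholders, not the actual GEOS hybrid-sigma levels.

```python
import numpy as np

TOTAL_SO2_KG = 2.7e9        # 2.7 Tg SO2
DURATION_S = 24.0 * 3600.0  # spread over 24 hr

# Illustrative model-level edges spanning the 16-18 km injection layer.
level_bots_km = np.array([16.0, 16.5, 17.0, 17.5])
level_tops_km = np.array([16.5, 17.0, 17.5, 18.0])

thickness = level_tops_km - level_bots_km
weights = thickness / thickness.sum()   # uniform-by-depth partition

rate_kg_s = TOTAL_SO2_KG / DURATION_S * weights
for b, t, r in zip(level_bots_km, level_tops_km, rate_kg_s):
    print(f"{b:.1f}-{t:.1f} km: {r:,.0f} kg SO2 per second")
```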
Results and Discussion
The GEOS model (Figure 1a) shows similar transport and SO2 magnitude to that observed by TOMS (Figure 1b) and HIRS/2 (Figure 1c) for the 6 days after the Cerro Hudson eruption. An average peak value of 109 DU in the model is slightly lower than the 130+ DU peak in observations. By the time the plume returned to the longitude of the volcano, the modeled plume had a peak SO2 column of 16 DU while columns as high as 50 DU were observed by both instruments. The horizontal extent of the plume is wider in the model, indicating that the lower peak values are in part due to the spatial resolution of the model compared to the observations. The model-calculated late-September zonal mean aerosol size distribution at 11 km at the latitude of McMurdo Bay, Antarctica (78°S), shows an enhancement by a factor of three in the total particle number concentration in the ensemble that includes Cerro Hudson. Particle size also increases when Cerro Hudson is included, from an effective radius of 0.12 to 0.16 μm, representative of young volcanic plumes. This magnitude of enhancement in the number concentration and size of aerosols is consistent with the anomalous aerosol layer observed at 11 km by Deshler et al. (1992) above McMurdo Bay on September 27th relative to earlier balloon flights (Figure 2).
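Effective radius, as used above, is the ratio of the third to the second moment of the size distribution. A minimal sketch over sectional (binned) output follows, with placeholder bin radii and number concentrations rather than actual CARMA output.

```python
import numpy as np

def effective_radius(r_um, n_per_cm3):
    """r_eff = sum(n r^3) / sum(n r^2) for a binned size distribution,
    with bin-center radii r (um) and number concentrations n (cm^-3)."""
    r = np.asarray(r_um)
    n = np.asarray(n_per_cm3)
    return np.sum(n * r**3) / np.sum(n * r**2)

# Placeholder bins loosely resembling a young volcanic aerosol mode.
r_bins = np.array([0.05, 0.1, 0.2, 0.4])
n_bins = np.array([80.0, 40.0, 8.0, 0.5])
print(f"r_eff = {effective_radius(r_bins, n_bins):.2f} um")
```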
Starting in late September, the ensemble including Cerro Hudson has a 5%-10% larger ozone hole area, defined as the area inside the 220 DU contour, continuing throughout October (Figure 3). It should be noted that the model-calculated ozone hole area is larger than the observed 1991 values due to a model low bias in polar ozone. The recovery of ozone values in the ensemble including Cerro Hudson is slowed by an average of 13 days throughout October. The extra ozone hole area in the simulations with Cerro Hudson is primarily driven by lower ozone values in the "collar region," defined here as the longitudinal ring around the ozone hole between 55°S and 65°S. The model shows 20%-40% lower ozone in the collar region between 10 and 25 km when Cerro Hudson is included (Figure 4c).
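For reference, the ozone-hole-area diagnostic used here (the area inside the 220 DU contour) can be written in a few lines. The sketch below assumes a regular latitude-longitude grid and a 45°S poleward cutoff; both the grid layout and the cutoff are illustrative assumptions rather than the exact GEOS post-processing.

```python
import numpy as np

R_EARTH = 6.371e6  # Earth radius, m

def ozone_hole_area(toz_du, lat_deg, lon_deg):
    """Area (m^2) inside the 220 DU contour on a regular lat-lon grid.

    toz_du : (nlat, nlon) total ozone column in Dobson units
    """
    dlat = np.deg2rad(abs(lat_deg[1] - lat_deg[0]))
    dlon = np.deg2rad(abs(lon_deg[1] - lon_deg[0]))
    # area of each grid cell: R^2 cos(lat) dlat dlon
    cell = (R_EARTH**2 * np.cos(np.deg2rad(lat_deg)) * dlat * dlon)[:, None]
    cell = np.broadcast_to(cell, toz_du.shape)
    mask = (toz_du < 220.0) & (lat_deg[:, None] < -45.0)
    return float(np.sum(cell[mask]))
```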
The development of this low-ozone collar region in the ensemble including Cerro Hudson is coincident with the start of lower temperatures when compared with the ensemble without Cerro Hudson (Figure 5). The ensembles meaningfully diverge starting in late September. While there is a small amount of volcanic aerosol surface area from Pinatubo in the region prior to this divergence, once the ensembles diverge there is less aerosol surface area between 20 and 25 km in the ensemble including Cerro Hudson (Figures 5a and 4), despite a higher volcanic aerosol loading between 10 and 20 km in that ensemble (Figures 5b and 4).
The differences in ozone concentrations above 20 km were not driven by a change in the heterogeneous component of the ozone chemistry, as evidenced by the lack of volcanic aerosol at that altitude (Figure 5a). Ozone depletion at these altitudes mainly occurs during August and September, when only small aerosol enhancements are noted. The lower temperatures relative to the background at 20-25 km in both ensembles point instead to a dynamical perturbation as the cause of the lower ozone values. By comparison, below 20 km (Figure 5b), the additional aerosol surface area in the ensemble including Cerro Hudson is coincident with the lower ozone values, indicating extra heterogeneous activation of ozone-depleting substances.
Ozone near the edge of the vortex is driven primarily by tropospheric wave forcing of the stratosphere (Newman et al., 2004). Eddy heat flux (Figure 6a), the product of the temperature and meridional-wind anomalies from the zonal mean, is proportional to the planetary wave energy that propagates vertically into the stratosphere (Edmon et al., 1980) and has been shown to drive temperature and ozone concentrations near the edge of the vortex by controlling vertical motions in the polar lower stratosphere (Newman et al., 2001, 2004). Negative values from September 13-19 (3-5 days before the ensembles diverge in ozone in the collar region) indicate a wave event in the ensemble without Cerro Hudson across the three members. In the Southern Hemisphere, a negative eddy heat flux means increased downward motion in the vortex collar, increasing temperatures and ozone in the polar lower stratosphere. This suggests that the radiative impact of the Cerro Hudson aerosol layer acts to reduce the tropospheric wave activity propagating into the stratosphere, resulting in a colder, lower-ozone collar region above the layer of Cerro Hudson aerosols. Radiative heating of the Cerro Hudson aerosols in the midlatitudes also increases the temperature gradient at the altitude of the Antarctic vortex, which is associated with a stronger zonal wind (Figure 6b). This results in the longer-lasting isolated vortex in the ensemble including Cerro Hudson.
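The eddy heat flux diagnostic referenced above is straightforward to compute from gridded fields. The sketch below assumes (lat, lon) arrays of temperature and meridional wind at a single level; it is a generic implementation of the zonal-mean [v'T'] definition, not the exact code used for Figure 6a.

```python
import numpy as np

def eddy_heat_flux(T, v):
    """Zonal-mean eddy heat flux [v'T'] (K m/s) from (nlat, nlon) fields."""
    T_anom = T - T.mean(axis=-1, keepdims=True)  # deviation from zonal mean
    v_anom = v - v.mean(axis=-1, keepdims=True)
    return (v_anom * T_anom).mean(axis=-1)       # one value per latitude
```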
Conclusions
The free-running GEOS model shows that the direct radiative impact of the 15 August 1991 Cerro Hudson eruption may have altered the dynamics of the Southern Hemisphere, suppressing tropospheric wave propagation into the stratosphere. This suppression cools the Antarctic lower stratosphere, increases the temperature difference between low latitudes and the pole, and strengthens the vortex. The suppression also results in a slower breakdown of the Antarctic vortex than would otherwise have occurred. Ultimately, this dynamical forcing of the lower stratosphere in the model results in lower ozone within the collar region above 20 km in late September and October. While this dynamical impact is consistent across all ensemble members in this study, a larger ensemble is needed to strengthen these findings.
Goddard Earth Observing System includes coupling between the volcanic aerosols and chlorine and bromine activation, but the model simulations show little volcanic aerosol present above 20 km during the peak ozone-hole depletion period. Based on these results, the aerosols from Pinatubo and Cerro Hudson did not directly change ozone chemistry in the 1991 Antarctic vortex above 20 km. The modeled Cerro Hudson aerosols did penetrate the Antarctic stratosphere in a layer near 15 km (below the primary PSC-driven ozone depletion region). At this altitude, the aerosol surface area in the model is increased by more than an order of magnitude, where it causes some additional ozone depletion.
The combined dynamical and chemical impacts cause 12 DU less column ozone in the collar region with respect to the GEOS background ensemble, increasing the ozone hole area by an average of 9% in October. Between 20 and 25 km, local ozone is decreased by an average of 1.3 DU/km (0.37 ppmv, 11%) in October from background levels. The region between 10 and 20 km in the model shows a decrease of 1.5 DU/km (0.11 ppmv, 26%). Ozone in this region was observed by Deshler et al. (1992) to decrease by 50%, consistent with both the satellite observations of ozone from late September through October 1991 (Krueger et al., 1992) and the in-situ aerosol and ozone measurements of Hofmann and Oltmans (1993). We have focused here on the impact on the 1991 ozone hole, but the increasing aerosol surface area in the collar region in Figure 5 shows that the Pinatubo aerosols may have impacted the ozone hole in subsequent years, as has been shown by modeling and observational studies (Hofmann & Oltmans, 1993; Knight et al., 1998; Stenchikov et al., 2002).
Finally, the Cerro Hudson impact is deeply convolved with the Pinatubo impact. The GEOS simulations show an anomalously cold and persistent vortex prior to the impact of Cerro Hudson in late September. In the collar region specifically, ozone concentrations were 2.5 DU/km (0.71 ppmv) lower than the background and had recovered to background values from the Pinatubo-caused anomaly above 20 km by October 16th. A similar lag is seen between 10 and 20 km. The temperature in the collar region in August is 7.8 K colder in both ensembles when compared to background values, indicating Pinatubo's role in the stability of the Antarctic vortex. These results could also have relevance to geoengineering scenarios, highlighting the potential interactions between solar-radiation-management aerosols and volcanic eruptions. The divergent chemical and radiative impacts of Pinatubo and Cerro Hudson shown here need to be studied in the context of volcanic eruptions occurring during an ongoing geoengineering scheme.
Figure 1.
Figure 1. Seven-day composite of model-calculated (a), Total Ozone Mapping Spectrometer satellite-retrieved (b), and HIRS/2 (c) SO2 column concentrations as the Cerro Hudson plume transits the southern midlatitudes. The observations from August 15th-21st are shown moving clockwise around Antarctica. Panel (c) is from Miles et al., 2017.
Figure 2.
Figure 2. Balloon-borne optical particle counter observations above McMurdo Bay on September 27th (red Xs), compared with model-calculated late-September zonal-mean cumulative aerosol size distributions at the latitude of McMurdo for the ensemble including Cerro Hudson (blue dots with error bars) and the ensemble excluding Cerro Hudson (black dots with error bars). Error bars represent the 95% confidence interval across each ensemble.
Figure 3.
Figure 3. Ozone hole area (defined as the area contained by the 220 DU total ozone value) in the Goddard Earth Observing System model for the ensemble including Cerro Hudson (solid blue), ensemble excluding Cerro Hudson (dashed blue), and the background excluding both eruptions (solid red).
Figure 4.
Figure 4. Zonal mean ozone concentrations (DU/km) and sulfate aerosol surface area densities (cm² m⁻³) in the Goddard Earth Observing System modeled Southern Hemisphere. The filled contours show the ozone fields and the white contours the aerosol surface area fields for the ensemble including Cerro Hudson (a), the ensemble excluding Cerro Hudson (b), and the difference (c). The dotted white line on each panel indicates the vortex edge. The dotted red lines indicate the upper and lower collar regions.
Figure 5.
Figure 5. Modeled temperature (red), ozone (blue), and surface area (black) anomalies with respect to the background versus time in the polar vortex collar region (55°S to 65°S) for (a) 20-25 km and (b) 10-20 km. Zero values indicate that the ensembles are similar to the background ensemble, while negative anomalies indicate colder temperatures, lower ozone concentrations, and lower surface area. MERRA-2 temperatures are shown in pink.
Figure 6.
Figure 6. (a) Southern hemisphere eddy heat flux for the ensemble mean with Cerro Hudson (solid line) and the ensemble mean without Cerro Hudson (dashed line). The shaded area indicates days when the ensemble members do not overlap. (b) (Red) Southern hemisphere poleward temperature gradient, calculated as the temperature difference between the region between 15°S and 30°S and the region between 60°S and 90°S, between 10 and 30 km. (Blue) Southern hemisphere vortex zonal wind speed, defined as the wind speed averaged from 10 to 20 km, 40°-60°S. | 3,952.2 | 2024-03-05T00:00:00.000 | [
"Environmental Science",
"Geology"
] |
Asymptotic correlation functions and FFLO signature for the one-dimensional attractive spin-1/2 Fermi gas
We investigate the long distance asymptotics of various correlation functions for the one-dimensional spin-1/2 Fermi gas with attractive interactions using the dressed charge formalism. In the spin polarized phase, these correlation functions exhibit spatial oscillations with a power-law decay, whose critical exponents are found through conformal field theory. We show that the spatial oscillations of the leading terms in the pair correlation function and the spin correlation function depend solely on $\Delta k_F$ and $2\Delta k_F$, respectively. Here $\Delta k_F=\pi(n_{\uparrow}-n_{\downarrow})$ denotes the mismatch between the Fermi surfaces of spin-up and spin-down fermions. Such spatial modulations are characteristic of a Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state. Our key observation is that backscattering among the Fermi points of bound pairs and unpaired fermions results in a one-dimensional analog of the FFLO state and reveals a microscopic origin of the FFLO nature. Furthermore, we show that the pair correlation function in momentum space has a peak at the point of mismatch between the two Fermi surfaces, $k=\Delta k_F$, which has recently been observed in numerous numerical studies.
I. INTRODUCTION
Bardeen-Cooper-Schrieffer (BCS) theory was formulated over 50 years ago as a microscopic theory for superconductivity. One of the ingredients in BCS theory is pairing between electrons with opposite momenta and spins, i.e., matching between the Fermi energies of spin-up and spin-down electrons. In the phase where the system is partially polarized, the Fermi energies of spin-up and spin-down electrons become unequal. This leads to a non-standard form of pairing, which was predicted independently by Fulde and Ferrell [1], and Larkin and Ovchinnikov [2]. More recently, theoretical predictions of the existence of an FFLO state in one-dimensional (1D) interacting fermions [3,4] have emerged through the employment of various methods, such as the Bethe ansatz (BA) [5,6], the density-matrix renormalization group (DMRG) [7-11], quantum Monte Carlo (QMC) [12], mean field theory [13-16] and bosonization [17]. At finite magnetization, it was found by Feiguin and Heidrich-Meisner [7] that the pair correlations for the attractive Hubbard model in a parabolic trapping potential have a power-law decay of the form $n_{\rm pair} \propto \cos(k_{\rm FFLO}|x|)/|x|^{\alpha}$, and that the pair momentum distribution has peaks at the mismatch of the Fermi surfaces $k_{\rm FFLO} = \pi(n_{\uparrow} - n_{\downarrow})$. Wave numbers for the oscillations were numerically found to be $\pi(n_{\uparrow} - n_{\downarrow})$ for the pair correlation function and $2\pi(n_{\uparrow} - n_{\downarrow})$ for the density difference $n_{\uparrow} - n_{\downarrow}$ [8]. The FFLO pairing wave number was also confirmed by the occurrence of a peak in the pair momentum distribution corresponding to the difference between the Fermi momenta of the individual species [9,12]. From mean field theory, it was demonstrated that the FFLO phase exists in the large-scale response of the Fermi gas [15] and even for temperatures up to $0.1\,T_F$ [14].
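As a purely illustrative check of this FFLO signature (not a Bethe-ansatz calculation), the sketch below Fourier-transforms a pair correlation of the assumed form $\cos(k_{\rm FFLO}|x|)/|x|^{\alpha}$ and confirms that the resulting momentum distribution peaks at $k_{\rm FFLO} = \pi(n_{\uparrow} - n_{\downarrow})$; the densities, exponent and cutoffs are arbitrary choices.

```python
import numpy as np

n_up, n_dn, alpha = 0.6, 0.4, 0.6       # illustrative densities and exponent
k_fflo = np.pi * (n_up - n_dn)

dx = 0.5
x = np.arange(1.0, 400.0, dx)           # cut off short distances
g_pair = np.cos(k_fflo * x) / x**alpha  # assumed FFLO-type pair correlation

k = np.linspace(0.0, np.pi, 600)
# cosine transform of the (even) correlation function
g_k = 2.0 * (g_pair[None, :] * np.cos(k[:, None] * x[None, :])).sum(axis=1) * dx
print("peak at k =", k[np.argmax(g_k)], "expected", k_fflo)
```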
On the other hand, the critical behavior of 1D many-body systems with linear dispersion in the vicinities of their Fermi points can be described by conformal field theory. Some time ago, the critical behavior of the Hubbard model with attractive interaction was investigated by Bogoliubov and Korepin [18-21]. They showed that 1D superconductivity occurs when the average distance between electron pairs is larger than the average distance between the individual electrons of these pairs. This means that the correlation function for the single-particle Green's function decays exponentially, i.e., $\langle \psi^{\dagger}_{n,s}\psi_{1,s}\rangle \to e^{-n/\xi}$ with $\xi = v_F/\Delta$ and $s = \uparrow, \downarrow$, whereas the singlet pair correlation function decays as a power of distance, $\sim n^{-\theta}$. Here $\Delta$ is the energy gap, and the correlation length $\xi$ and the critical exponent $\theta$ are both greater than zero. This criterion is met when the external magnetic field is small, i.e., $H < H_c$. Once the external field exceeds the critical value, i.e., $H > H_c$, Cooper pairs are destroyed. Thus both of these correlation functions decay as a power of distance and the pairs lose their dominance, i.e., the electrons become more or less independent of each other.
So far, theoretical confirmation of the FFLO state in 1D still relies on numerical evidence of spatial oscillations in the pair correlations. Although key features of the T = 0 phase diagram [5,6,22-25] for the attractive Fermi gas were experimentally confirmed using finite-temperature density profiles of trapped fermionic 6Li atoms [28], the unambiguous theoretical confirmation and experimental observation of FFLO pairing is still an open problem. As remarked in Ref. [9], the 1D FFLO scenario proposed in Ref. [17] does not apply to 1D attractive fermions, where the quantum phase transition from the fully-paired phase into the spin polarized phase does not belong to the commensurate-incommensurate universality class; see also Refs. [22,26]. For 1D attractive spin-1/2 fermions with polarization [3,4], the low-energy physics of the homogeneous system is described by a two-component Tomonaga-Luttinger liquid (TLL) of bound pairs and excess unpaired fermions in the charge sector and ferromagnetic spin-spin interactions in the spin sector [27]. In this paper, we determine the critical behavior of the single-particle Green's function, the pair correlation function and the spin correlation function within the context of a TLL. We show that the long distance asymptotics of various correlation functions provide a microscopic origin of FFLO pairing for 1D attractive fermions.
This paper is organized as follows. We derive finite-size corrections for the ground state energy of the system in Section II. In Section III, we derive finite-size corrections for low-lying excitations and introduce the dressed charge formalism. The integral equations for the components of the dressed charge matrix are solved analytically in the strong coupling limit |c| ≫ 1. In Section IV, we derive correlation functions for different operators and discuss the signature of FFLO pairing. Finally, conclusions and remarks are made in Section V.
II. GROUND STATE AND FINITE-SIZE CORRECTIONS
We consider $N_f$ fermions with SU(2) spin symmetry in a 1D system of length L with periodic boundary conditions. The Hamiltonian for the spin-1/2 Fermi gas [3,4] is given by the delta-interaction (Gaudin-Yang) form
$$H = -\frac{\hbar^2}{2m}\sum_{i=1}^{N_f}\frac{\partial^2}{\partial x_i^2} + g_{1D}\sum_{1\le i<j\le N_f}\delta(x_i - x_j), \qquad g_{1D} = \frac{\hbar^2 c}{m},$$
where c < 0 is the attractive interaction strength. This model is one of the most important exactly solvable quantum many-body systems. In recent years, it has attracted considerable attention from theory [5,6,22-25] and experiment [28] due to evidence of the FFLO state.
Systems exhibiting novel phase transitions at T = 0 are particularly useful in studying TLL physics [27] and the nature of the FFLO state.
The quasimomenta for unpaired fermions and bound pairs are given by $k_j$ and $\Lambda_\alpha \pm ic'$, which satisfy the BA equations with quantum numbers $I_j$ and $J_\alpha$. Here $c' = c/2$, and $N_u$ and $N_b$ denote the number of unpaired fermions and bound pairs, respectively. The energy and momentum of the system follow from these BA roots. We define monotonically increasing counting functions $z^L_u(k_j) := I_j/L$ and $z^L_b(\Lambda_\alpha) := J_\alpha/L$ and re-label the variables $k \to k_u$, $\Lambda \to k_b$, $I_j \to I_{u,j}$ and $J_\alpha \to I_{b,\alpha}$, so that we can express the root densities in a general form in terms of the kernels $a_n(k)$. Here $k_{\alpha,j}$ (for $j = 1, 2, \ldots, N_\alpha$ and $\alpha = u, b$) denote the BA roots for unpaired fermions and bound pairs in the ground state.
Using the Euler-Maclaurin formula for contributions up to $O(1/L^2)$ when $L \gg 1$, the finite-size corrections to the root densities can be written in a generic form involving a kernel matrix $K(k)$. Here, the Fermi points are denoted by $\pm Q_\alpha$. Notice that $K(k)$ is a symmetric matrix.
In order to calculate finite-size corrections for the ground state and low-energy excitations, we introduce the thermodynamic Bethe ansatz (TBA) [29,30], which provides a powerful and elegant way to study the thermodynamics of 1D integrable systems. It is convenient for analyzing phase transitions and low-lying excitations in the presence of external fields at zero temperature. In the thermodynamic limit, the grand partition function is $Z = \mathrm{tr}(e^{-H/T}) = e^{-G/T}$, where the Gibbs free energy $G = E - HM^z - \mu n - TS$ is written in terms of the external magnetic field H, the chemical potential µ and the entropy S [30]. Equilibrium states satisfy the condition of minimizing the Gibbs free energy with respect to the particle and hole densities for the charge and spin degrees of freedom (more details are given in Refs. [22,30-33]). At zero temperature, the ground state properties are determined by the dressed energy equations for the dressed energies $\varepsilon_\alpha$. 1D many-body systems are critical at T = 0 and exhibit not only global scale invariance but local scale invariance too, i.e., conformal invariance. The conformal group is infinite dimensional and completely determines the conformal dimensions and correlation functions when the excitations are gapless [34]. Conformal invariance predicts that the energy per unit length has a universal finite-size scaling form that is characterized by the dimensionless number C, the central charge of the underlying Virasoro algebra [35,36]. From the density distributions (9) and the dressed energy equations (11), the finite-size correction to the ground state energy takes the standard conformal scaling form (reproduced below), where C = 1, and $v_u$ and $v_b$ are the velocities of unpaired fermions and bound pairs, respectively. They are defined through the dressed energies and momenta, where the prime denotes the derivative with respect to $k_\alpha$ and $p_\alpha(k_\alpha) = \lim_{L\to\infty} 2\pi z^L_\alpha(k_\alpha)$. The term $\varepsilon_0^{\infty}$ represents the ground state energy density in the thermodynamic limit, i.e., $N, L \to \infty$. In the strong coupling limit, exact expressions for the velocities can be found in Refs. [22,37].
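For the reader's convenience, the standard conformal finite-size form referred to above, together with the velocity definition, reads as follows (this is the textbook CFT expression, reproduced here as an aid; the two velocities enter additively because the charge sector contains two gapless components):

$$\frac{E_0(L)}{L} \;\approx\; \varepsilon_0^{\infty} \;-\; \frac{\pi C\,(v_u + v_b)}{6 L^{2}}, \qquad v_\alpha = \frac{\varepsilon_\alpha'(Q_\alpha)}{p_\alpha'(Q_\alpha)}, \quad \alpha = u, b, \quad C = 1.$$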
III. LOW-LYING EXCITATIONS AND DRESSED CHARGE EQUATIONS
The critical behavior of gapless 1D systems is encoded in the finite-size corrections to their low-lying excitations. The method we use to study the correlation functions of the spin-1/2 Fermi gas with attractive interaction follows closely the method set out in Refs. [40-43].
The conformal dimensions of two-point correlation functions can be calculated from the elements of the dressed charge matrix Z. The long distance asymptotics of various correlation functions are then examined through the dressed charge formalism at T = 0. Three types of low-lying excitations are considered in the calculations of the finite-size corrections.
Type 1 excitation is characterized by moving a particle close to the right or left Fermi point outside the Fermi sea. It is equivalent to changing the quantum numbers $I_{\alpha,j}$ close to $I^{\pm}_\alpha$ for unpaired fermions ($\alpha = u$) and bound pairs ($\alpha = b$). $I^{\pm}_\alpha$ characterize the Fermi points of each Fermi sea and are given by $I^{+}_\alpha = I^{\max}_\alpha + 1/2$ and $I^{-}_\alpha = I^{\min}_\alpha - 1/2$. The change in total momentum from Type 1 excitations is $\Delta P = \frac{2\pi}{L}\sum_{\alpha=u,b}(N^{+}_{\alpha}-N^{-}_{\alpha})$ and the change in energy is $\Delta E = \frac{2\pi}{L}\sum_{\alpha=u,b} v_{\alpha}(N^{+}_{\alpha}+N^{-}_{\alpha})$. Here $N^{+}_\alpha \ge 0$ ($N^{-}_\alpha \ge 0$) stems from the change in the distribution of quantum numbers close to the right (left) Fermi point. This type of excitation is commonly known as a particle-hole excitation.
Type 2 excitation arises from a change in the total number of unpaired fermions or bound pairs. It is characterized by the change in quantum numbers, i.e., $\Delta N_\alpha = N^{\mathrm{excited}}_{\alpha} - N^{\mathrm{ground}}_{\alpha}$. On the other hand, Type 3 excitation is caused by moving a particle from the left Fermi point to the right Fermi point and vice versa. This type of excitation is also known as backscattering. It is characterized by the quantum numbers $\Delta D_\alpha$, while leaving $\Delta N_\alpha$ unchanged.
All three types of excitations can be unified in a single form of the finite-size corrections for the energy and total momentum of the system, expressed in the notations $\Delta N_\alpha$, $\Delta D_\alpha$ and $N^{\pm}_\alpha$. The dressed charge equations are a set of four coupled integral equations for the entries of the matrix Z. The quantum numbers $\Delta D_u$ and $\Delta D_b$ of Eq. (18) are chosen based on the conditions given in Eq. (4) and also on the conditions that $\Delta D_u \equiv \Delta N_u/2$ (mod 1) and $\Delta D_b \equiv \Delta N_b/2$ (mod 1).
Combining both conditions with the definition given in Eq. (18) yields the allowed values of $\Delta D_u$ and $\Delta D_b$. When the external magnetic field H is smaller than the critical field, spin excitations of this model are gapped; once H exceeds this critical field, spin excitations become gapless. At T = 0, the correlation functions of 1D systems decay as powers of distance, whereas for T > 0 they decay exponentially. Following the standard calculations in Ref. [43], the conformal dimensions are determined by the dressed charges and the quantum numbers, where $N^{\pm}_\alpha$ ($\alpha = u, b$) characterize the descendent fields of the primary fields; in the general two-point correlation functions at T = 0, the exponential oscillating term in the asymptotic behavior comes from Type 3 excitations. The functions $a_n(k)$ have leading order $1/|c|$, hence we can ignore all terms containing two or more factors of $a_n(k)$. Substituting the resulting expansions of Eq. (30) into Eq. (22), and evaluating Eq. (24) at the Fermi point $Q_u$, gives the dressed charges to leading order in $1/|c|$, where $n_f = N_f/L$ is the density of fermions per unit length, $\gamma = c/n_f$ is the dimensionless interaction parameter and $P = (N_{\uparrow} - N_{\downarrow})/N_f = N_u/N_f$ is the polarization. In FIG. 1, the dressed charges are calculated numerically and plotted against the polarization for different values of the interaction strength $|\gamma|$.
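The dressed charges shown in FIG. 1 are obtained by solving coupled integral equations of Fredholm type. A generic Nyström-style sketch of such a solver is given below; the single Lorentzian-type kernel $a_2$ and the scalar (rather than 2×2 matrix) structure are simplifying assumptions, so this is a template for the numerics rather than the exact coupled system of Eq. (24).

```python
import numpy as np

# Generic sketch for a dressed-charge-type equation of Fredholm form
#   Z(k) = 1 + \int_{-Q}^{Q} K(k, k') Z(k') dk',
# solved by discretizing on a uniform grid (Nystrom method).

def a_n(k, n, c):
    """Lorentzian kernel a_n(k) = (1/2pi) n|c| / ((n|c|/2)^2 + k^2)."""
    return (1.0 / (2.0 * np.pi)) * n * abs(c) / ((n * abs(c) / 2.0)**2 + k**2)

def dressed_charge(Q, c, n=2, m=400):
    k, w = np.linspace(-Q, Q, m, retstep=True)        # grid and spacing
    K = a_n(k[:, None] - k[None, :], n, c) * w        # quadrature-weighted kernel
    Z = np.linalg.solve(np.eye(m) - K, np.ones(m))    # (I - K) Z = 1
    return k, Z

k, Z = dressed_charge(Q=1.0, c=-10.0)
print("Z at the Fermi point:", Z[-1])
```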
In the strong coupling limit, the external magnetic field H is related to the polarization as $H \approx \frac{n^2|\gamma|^2}{2} + 2\pi^2 n^2 P^2\left(1 + \cdots\right)$. With this relation, we can evaluate the dressed charges for different values of H. From the expressions for the dressed charges in Eq. (37), the conformal dimensions $\Delta^{\pm}_\alpha$ are obtained in terms of the polarization.
IV. CORRELATION FUNCTIONS AT ZERO TEMPERATURE
Here we consider four types of correlation functions, namely the single-particle Green's function $G_{\uparrow}(x, t)$, the charge density correlation function $G_{nn}(x, t)$, the spin correlation function $G_z(x, t)$, and the pair correlation function $G_p(x, t)$. Each correlation function is derived based on the choice of $\Delta N_u$ and $\Delta N_b$.
The one particle Green's function, which is also called the Fermi-field (FF) correlation function in some literature, decays exponentially when the external magnetic field is not strong enough to overcome the gap associated with the breaking of bound states [18][19][20][21].
Once in the gapless phase, i.e., when $H_{c1} < H < H_{c2}$, where $H_{c1}$ and $H_{c2}$ are the critical fields mentioned in Ref. [22], every correlation function at zero temperature decays spatially as some form of power law [34-36, 38, 39]. $G_{\uparrow}(x, t)$ is characterized by $(\Delta N_u, \Delta N_b) = (1, 0)$, which in turn allows quantum numbers $\Delta D_u \in \mathbb{Z} + 1/2$ and $\Delta D_b \in \mathbb{Z} + 1/2$. The leading terms and their critical exponents then follow from the conformal dimensions. The first term in $G_{\uparrow}(x, t)$ comes from $(\Delta D_u, \Delta D_b) = (1/2, -1/2)$, and the second term from the next allowed choice. Next we consider the charge density correlation function $G_{nn}(x, t)$ together with the spin correlation function $G_z(x, t)$. Both of these correlation functions are characterized by the set of quantum numbers $(\Delta N_u, \Delta N_b) = (0, 0)$, which allows quantum numbers $\Delta D_u \in \mathbb{Z}$ and $\Delta D_b \in \mathbb{Z}$. The leading terms are given by $G_{nn}(x, t) = \langle n(x, t)\, n(0, 0)\rangle$ and the analogous expression for $G_z(x, t)$, where the operators $n(x, t)$ and $S^z(x, t)$ are expressed in terms of the fields. The critical exponents for the asymptotic expressions of $G_{nn}(x, t)$ and $G_z(x, t)$ follow likewise. The constant terms in $G_{nn}(x, t)$ and $G_z(x, t)$ come from the choice of quantum numbers $(\Delta D_u, \Delta D_b) = (0, 0)$. The second, third and fourth terms arise from the choices (1, 0), (0, 1) and (−1, 1), respectively.
Finally, we consider the pair correlation function $G_p(x, t)$. This correlation function is characterized by the set of quantum numbers $(\Delta N_u, \Delta N_b) = (0, 1)$, which allows quantum numbers $\Delta D_u \in \mathbb{Z} + 1/2$ and $\Delta D_b \in \mathbb{Z}$. The leading terms and their critical exponents again follow from the conformal dimensions, whilst the second term arises from the choice $(\Delta D_u, \Delta D_b) = (1/2, -1)$.
The leading order of the long distance asymptotics of the pair correlation function $G_p(x, t)$ oscillates with wave number $\Delta k_F$, where $\Delta k_F = \pi(n_{\uparrow} - n_{\downarrow})$. Meanwhile, the leading order of the spin correlation function $G_z(x, t)$, which can also be thought of as the correlation of the density difference between spin-up and spin-down fermions, oscillates twice as fast, with wave number $2\Delta k_F$. The oscillations in $G_p(x, t)$ and $G_z(x, t)$ are caused by an imbalance in the densities of spin-up and spin-down fermions, i.e., $n_{\uparrow} - n_{\downarrow}$, which gives rise to a mismatch between the Fermi surfaces of the two species of fermions. These spatial oscillations share the signature of the Larkin-Ovchinnikov (LO) pairing phase [2]. Our findings for the wave numbers agree with those discovered through DMRG [7-9], QMC [12] and mean field theory [14]. From conformal field theory, moreover, we see clearly that the spatial oscillation terms in the pair and spin correlations are a consequence of Type 3 excitations, i.e., backscattering of bound pairs and unpaired fermions. A comparison between our results and the results from numerical methods in Refs. [7-9,12] suggests that the coefficient $A_{p,1}$ is much larger than the coefficient $A_{p,2}$, because the frequency of the oscillations in numerical studies of $G_p(x, t)$ is almost identical to $\pi(n_{\uparrow} - n_{\downarrow})$. This observation also applies to $G_z(x, t)$, where $A_{z,2}$ and $A_{z,3}$ are much smaller than $A_{z,1}$.
The correlation functions in momentum space can be derived by taking the Fourier transform of their counterparts in position space. From Refs. [42,43], the Fourier transform of an equal-time correlation function with conformal dimensions $\Delta^{\pm} = \Delta^{\pm}_u + \Delta^{\pm}_b$ has a power-law singularity at the oscillation wave number. The conformal spin of the operator is $s = \Delta^{+} - \Delta^{-}$, and the exponent $\nu$ is expressed in terms of the conformal dimensions as $\nu = 2(\Delta^{+} + \Delta^{-}) - 1$.
Hence the equal-time correlation functions near the singularities $k_0$ for the single-particle Green's function, charge density, spin and bound pairs take power-law forms with the exponents determined above. We would like to stress that the momentum space correlation functions derived in Eqs. (52)-(55) are only accurate when the momenta k are within the proximity of the wave numbers $k_0$, i.e., when $k \approx k_0$. FIG. 2 plots $G_p(k)$ against k as the polarization P varies between 0 and 0.8.
This figure is in qualitative agreement with those given in Refs. [7,9,12]. We stress again that our plot is accurate only within the vicinity of the singularity, i.e., when k approaches $\pi(n_{\uparrow} - n_{\downarrow})$. We plotted $G_p(k)$ for the entire domain $k \in (0, \pi)$ so that readers can visualize the curves more easily.
V. CONCLUSION
In conclusion, we investigated various zero-temperature correlation functions for the spin-1/2 Fermi gas with attractive interaction. We derived the finite-size corrections for the ground state and the low-lying excitations of the model. Using conformal field theory, the critical exponents of the correlation functions were given in terms of the polarization and the interaction strength.
We found that the leading terms of the pair correlation function and the spin correlation function oscillate with frequencies π(n ↑ − n ↓ ) and 2π(n ↑ − n ↓ ), respectively. We also found that backscattering between the Fermi points of bound pairs and unpaired fermions results in a 1D analog of the FFLO state and displays a microscopic origin of the FFLO nature.
Furthermore, we showed that there is a peak in the pair correlation function in momentum space at k = π(n ↑ − n ↓ ) which confirms the oscillation frequency.
In the spin polarized phase, these correlation functions exhibit spatial oscillations with a power-law decay. This critical behaviour can be viewed as an analog of long range order in 1D, i.e., the power-law decay of the pair correlation function is regarded as evidence of a superconducting/superfluid state. We would also like to mention that, within the dressed charge formalism, the asymptotic behavior of the correlation functions derived in this paper can be obtained numerically with high accuracy for arbitrary interaction strength. Additionally, by considering weakly perturbed inter-tube interactions or inter-lattice interactions (the 1D fermionic Hubbard model), quasi-1D correlations in the spin polarized phase can be calculated from perturbation theory [19]. This provides a promising opportunity to estimate the critical temperature for high-Tc superconductors/superfluids by studying the 1D-to-3D crossover in trapped cold atoms. | 4,972.8 | 2011-05-19T00:00:00.000 | [
"Physics"
] |
Fuzzy Analogues of Sets and Functions Can Be Uniquely Determined from the Corresponding Ordered Category: A Theorem
In modern mathematics, many concepts and ideas are described in terms of category theory. From this viewpoint, it is desirable to analyze what can be determined if, instead of the basic category of sets, we consider a similar category of fuzzy sets. In this paper, we describe a natural fuzzy analog of the category of sets and functions, and we show that, in this category, fuzzy relations (a natural fuzzy analogue of functions) can be determined in category terms, of course modulo a 1-1 mapping of the corresponding universe of discourse and a 1-1 re-scaling of the fuzzy degrees.
Introduction
Category theory is one of the main tools of modern mathematics. Many mathematical theories can be naturally described in terms of a directed graph, where vertices are the objects studied in the theory (e.g., sets in set theory, topological spaces in topology, linear spaces in linear algebra), and edges relate different objects: e.g., functions map one set into another, continuous mappings map one topological space into another, linear mappings map one linear space into another, etc. The corresponding graph is known as a category; see, e.g., [1].
In precise terms, a category is a tuple (Ob, Mor, :, id, •), where: • Ob is the set whose elements are called objects, • Mor is a set whose elements are called morphisms, • : Mor → Ob × Ob is a mapping that assigns, to each morphism f ∈ Mor, a pair of objects (a, b) ∈ Ob × Ob; this is denoted by f : a → b; the object a is called f's domain, and b is called f's range; • id is a mapping that assigns, to each object a ∈ Ob, a morphism id_a : a → a; and • • is a mapping that assigns, to each pair of morphisms f : a → b and g : b → c for which the range of f is equal to the domain of g, a new morphism g • f : a → c, so that for every f : a → b we have f • id_a = f = id_b • f, and composition is associative. Because of its universal character, category theory plays an important role in modern mathematics [1]. Many new mathematical concepts are defined in category terms, and many original concepts are re-formulated in category terms; such a reformulation in very general terms often enables mathematicians to generalize their ideas and results to a more general context.
As we have mentioned, different areas of mathematics can be described in terms of different categories: • Set theory is naturally described in terms of a category Set in which objects are sets and morphisms are functions.
• Topology is described in terms of a category Top in which objects are topological spaces and morphisms are continuous mappings.
• Linear algebra is naturally described in terms of a category Lin, in which objects are linear spaces, and morphisms are linear mappings, etc.
What happens in the fuzzy case? If we allow fuzzy sets (see, e.g., [2,7,16,17,19]), what is a natural analog of the category Set? In the category Set, morphisms from a to b are functions. In the crisp case, for each function f : a → b and for each element x ∈ a, we have a unique value f(x) ∈ b. Fuzzy means that for each x ∈ a, instead of a single value y = f(x) ∈ b, we may have different possible values y ∈ b, with different degrees of confidence. In general, we can have all possible values y ∈ b. For each x ∈ a and for each y ∈ b, we have a degree R_f(x, y) ∈ [0, 1] to which y is a possible value of f(x). Thus, a natural fuzzy analog of a function is a fuzzy relation.
Composition g • f of fuzzy relations f : a → b and g : b → c can be defined in the usual way. Namely, we want to know, for each pair of elements x ∈ a and z ∈ c, to what extent there exists a y ∈ b for which f brings us from x to y and g brings us from y to z. If we interpret "and" as min and "there exists" (an infinite "or") as max, then the above description translates into the following formula: $R_{g\circ f}(x,z) = \max_{y} \min\left(R_f(x,y),\, R_g(y,z)\right)$. (1) Since we have fuzzy relations, there is no need to explicitly describe the domain of each morphism: if for some x ∈ a the value f(x) is not defined, this simply means that for this x, we have R_f(x, y) = 0 for all y ∈ b. Similarly, there is no need to describe the range. Thus, without losing generality, we can assume that we have only one object, the universal set U, and that the relation R_f(x, y) is defined for all x ∈ U and y ∈ U. Morphisms are then fuzzy relations, with the usual composition rule (1).
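For a finite universe of discourse, formula (1) can be checked directly on degree matrices. The following sketch (with arbitrary illustrative degrees) implements the max-min composition and verifies that the identity morphism acts as a unit:

```python
import numpy as np

# Sketch of formula (1): max-min composition of fuzzy relations represented
# as degree matrices over a finite universe U = {0, ..., n-1}.

def compose(Rg, Rf):
    """(g o f)(x, z) = max_y min(R_f(x, y), R_g(y, z))."""
    # take min over the shared index y, then max over y
    return np.max(np.minimum(Rf[:, :, None], Rg[None, :, :]), axis=1)

Rf = np.array([[0.0, 0.9], [0.3, 0.1]])   # degrees R_f(x, y)
Rg = np.array([[0.5, 0.2], [1.0, 0.0]])   # degrees R_g(y, z)
identity = np.eye(2)                       # id(x, x) = 1, id(x, y) = 0 otherwise
assert np.allclose(compose(Rf, identity), Rf)   # f composed with id is f
print(compose(Rg, Rf))
```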
Need for an ordered category. In the crisp case, every property is either true or false.
As we gain more information, we may become more confident in our knowledge. For example, we may start with a situation in which, for a given x, several different values f(x) are possible, but after acquiring new information, we become more and more confident that there is only one possible value y_0 of f(x). This means that for the remaining value y_0, the degree of possibility R_f(x, y_0) remains the same, but for all y ≠ y_0, the corresponding degree R_f(x, y) decreases. To capture this phenomenon, it is reasonable to supplement the category structure with the corresponding component-wise ordering between fuzzy relations (morphisms): R ≤ R′ means that R(x, y) ≤ R′(x, y) for all x and y. Formulation of the problem. What can be defined based on this category-theory formulation? Can we uniquely determine the elements of the Universe of discourse U and the corresponding relations based on the categorical information? What we do in this paper. In this paper, as an answer to the above questions, we present an axiomatic description of fuzzy sets in the language of categories, with a proof of the soundness of this description.
Results
Towards a precise formulation of the problem. It is easy to see that if we have a 1-1 mapping π : U → U of the Universe of discourse U onto itself (i.e., a bijection), then the corresponding transformation R(x, y) → R(π(x), π(y)) is an automorphism of the corresponding category, in the sense that it preserves the identity, composition, and order.
Similarly, if we have a 1-1 monotonically increasing mapping H : [0, 1] → [0, 1], then the corresponding transformation R(x, y) → H(R(x, y)) is also such an automorphism. Indeed, since we only consider the order between degrees, a monotonic transformation of the degrees should not change anything.
It turns out that, modulo this simple equivalence, we can uniquely determine all the elements x ∈ U and all the relations R(x, y) from the ordered category; in precise terms, every automorphism is a composition of automorphisms of the above two types. The proof of this result will be based on an explicit description of the elements of U and the relations R_f(x, y) in category terms.
Let us describe the problem in precise terms.
Definition 1. By an ordered category, we mean a category in which, for every two objects a and b, there is a partial order ≤ on the set Mor(a, b) of all morphisms from a to b.
Definition 2. Let U be a set; we will call it the Universe of discourse. By a U-fuzzy ordered category, we mean an ordered category in which: • the only object is the set U, • morphisms are fuzzy relations, i.e., mappings R : U × U → [0, 1], • the morphism id is defined as the mapping for which id(x, x) = 1 and id(x, y) = 0 for x ≠ y, • the composition of morphisms is defined by formula (1),
and
• the order between the morphisms is the component-wise order: f ≤ g means that f (x, y) ≤ g(x, y) for all x and y.
The U-fuzzy ordered category will be denoted by F U .
Comment. One can easily see that this is indeed a category, i.e., the composition of morphisms is associative, and the composition of any morphism f with the identity morphism id is equal to f.
Definition 3. An automorphism of an ordered category is a pair consisting of bijections F : Ob → Ob and G : Mor → Mor for which: • for all f, a, and b, we have f : a → b if and only if G(f) : F(a) → F(b); • for all f and g, we have G(g • f) = G(g) • G(f); • for all a, we have G(id_a) = id_F(a); and • for all f and g, we have f ≤ g if and only if G(f) ≤ G(g).
Comment. This definition is a natural generalization of the standard definition of an automorphism of categories (see, e.g., [6,15,18]) to ordered categories.
Proposition. Let π : U → U be a bijection of U, and let H : [0, 1] → [0, 1] be a monotonic bijection. Then the transformation G_{π,H} that maps each morphism R(x, y) into H(R(π(x), π(y))) is an automorphism of the category F_U.
Our main result is that these are the only automorphisms of the category F U .
Theorem. For every set U, every automorphism of the ordered category F_U has the form G_{π,H} for some bijection π : U → U and for some monotonic bijection H : [0, 1] → [0, 1].
Comment. This may not be very clear from the formulation of the result, but the proof will show that we can determine the elements of the set U and the values of the mappings f(x, y) in category terms, i.e., we can indeed define fuzzy relations (a natural fuzzy analogue of functions) in category terms.
Proof of the Proposition
This proposition is easy to prove: a permutation π does not change anything, and the increasing bijection does not change the order.
Proof of the Theorem
1•. First, we can describe the morphism f_0 for which f_0(x, y) = 0 for all x and y in ordered-category terms, as the only morphism f for which f ≤ g for all morphisms g. Indeed, clearly f_0 ≤ g for all g. Vice versa, if f ≤ g for all g, then, in particular, f ≤ f_0, i.e., f(x, y) ≤ f_0(x, y) = 0 for all x and y, and since f(x, y) ∈ [0, 1], this means that indeed f(x, y) = 0 for all x and y.
2•. Let us first characterize all the morphisms f ≠ f_0 for which the set {g : g ≤ f} is linearly ordered.
Since an automorphism preserves order, every automorphism maps such morphisms into morphisms with the same property.
Specifically, we will prove that a morphism has this property if and only if we have f(x, y) > 0 for only one pair (x, y), with f(x′, y′) = 0 for all other pairs (x′, y′).
Indeed, one can easily check that for such morphisms f, the only morphisms g ≤ f are the morphisms which also have g(x′, y′) = 0 for all pairs (x′, y′) ≠ (x, y). Such morphisms g are uniquely described by the corresponding value g(x, y). For every two such morphisms g and g′, depending on whether g(x, y) ≤ g′(x, y) or g′(x, y) ≤ g(x, y), we have g ≤ g′ or g′ ≤ g, i.e., the set {g : g ≤ f} is indeed linearly ordered.
Vice versa, let us prove that if a morphism has this property, then it has f(x, y) > 0 for only one pair (x, y). Indeed, if we had f(x, y) > 0 and f(x′, y′) > 0 for two different pairs (x, y) ≠ (x′, y′), then we would be able to construct two different morphisms g ≤ f and g′ ≤ f for which neither g ≤ g′ nor g′ ≤ g. Namely, we take: • g(x, y) = f(x, y) > 0 and g(x″, y″) = 0 for all pairs (x″, y″) ≠ (x, y), and • g′(x′, y′) = f(x′, y′) > 0 and g′(x″, y″) = 0 for all pairs (x″, y″) ≠ (x′, y′).
This contradicts our assumption that the set {g : g ≤ f } is linearly ordered.
3•. Let us now describe, in ordered-category terms, the morphisms f for which f(x, x) > 0 for some x ∈ U and f(x′, y′) = 0 for all other pairs (x′, y′) ≠ (x, x).
Indeed, out of all the morphisms described in Part 2 of this proof, such morphisms can be singled out by the additional condition that f • f = f. This condition is clearly satisfied for such morphisms, while for morphisms with f(x, y) > 0 for some y ≠ x, the composition f • f is, as one can see, identically 0 and thus different from f.
Hence, if we have an automorphism, elements are mapped into elements in a 1-1 way, i.e., indeed we have a bijection of the Universe of discourse.
4•. One can see that two morphisms f and f′ of the type described in Part 3 are connected by the relation ≤ (i.e., f ≤ f′ or f′ ≤ f) if and only if they correspond to the same element x ∈ U. Thus, we can describe the elements of the set U in ordered-category terms: as equivalence classes of morphisms of the type described in Part 3 with respect to the relation ≤.
5•. Let us now show that the degrees from the interval [0, 1] can also be described, modulo increasing bijections of this interval, in ordered-category terms.
5.1•. Indeed, for each element x ∈ U, different degrees v ∈ [0, 1] can be associated with different morphisms f described in Part 3 of this proof, i.e., morphisms for which: • f(x, x) > 0 for this element x, and • f(x′, y′) = 0 for all pairs (x′, y′) ≠ (x, x).
Different degrees are then simply associated with different values v = f(x, x).
This construction provides us with degrees at each element x ∈ U. To get a general description of degrees, we need to relate the values corresponding to different elements x, y ∈ U.
5.2•. Let us denote by f_{x,v} the morphism for which f_{x,v}(x, x) = v and f_{x,v}(x′, y′) = 0 for all other pairs (x′, y′) ≠ (x, x). We want, for every x ≠ y, to connect the values v and w corresponding to the morphisms f_{x,v} and f_{y,w}. This connection comes from the following auxiliary result: w ≤ v if and only if there exist morphisms f_{x→y} and f_{y→x} for which f_{y,w} = f_{x→y} • f_{x,v} • f_{y→x}. Indeed, by the definition of composition, the values of the composition g • f cannot exceed the largest value of each of the composed relations g and f. Thus, if f_{y,w} = f_{x→y} • f_{x,v} • f_{y→x}, then w ≤ v. Vice versa, if w ≤ v, then we can take the following morphisms f_{x→y} and f_{y→x}: • f_{x→y}(x, y) = w and f_{x→y}(x′, y′) = 0 for all other pairs (x′, y′) ≠ (x, y), and, similarly, • f_{y→x}(y, x) = w and f_{y→x}(x′, y′) = 0 for all other pairs (x′, y′) ≠ (y, x). In this case, as one can easily check, we have f_{y,w} = f_{x→y} • f_{x,v} • f_{y→x}.
In this case, as one can easily check, we have 5.3 • .Now that we know how to describe the relation w ≤ v for functions f x,v and f y,w in ordered-category form, we can describe equality v = w between the degrees v and w corresponding to morphisms f x,v and f y,w as (v ≤ w) & (w ≤ v), i.e., in view of Part 5.2, as: This enables us to identify degrees v ∈ [0, 1] in ordered-category terms -by identifying them with the functions f x,v and taking into account the above possibility to compare degrees at different elements a.
Hence, if we have an automorphism, degrees are mapped into degrees in a 1-1 and order-preserving way, i.e., indeed we have a monotonic bijection H : [0, 1] → [0, 1].
6•. To complete the proof, we need to show how, for each morphism f and for every two elements x and y, we can describe the value f(x, y) in ordered-category terms. This will complete the proof that the given automorphism has the form G_{π,H} for the mappings π and H identified in Parts 4 and 5 of this proof.
6.1•. Let us first prove the following auxiliary result: v ≤ f(x, y) if and only if there exists a morphism f_{y→x} for which f_{x,v} = f_{y→x} • (f • f_{x,1}). Indeed, by the definition of composition, the composition c := f • f_{x,1} has the following form: • c(x, y′) = f(x, y′) for all y′, and • c(x′, y′) = 0 for all y′ and for all x′ ≠ x.
As we have argued in Part 5 of this proof, the value of a composition cannot exceed the values of the composed morphisms. Thus, for the composition f_{y→x} • c, the resulting value cannot exceed the value c(x, y) = f(x, y). Thus, if f_{x,v} = f_{y→x} • c, then v ≤ f(x, y). Vice versa, for every v ≤ f(x, y), we can construct a morphism f_{y→x} for which f_{x,v} = f_{y→x} • c; namely, we can take: • f_{y→x}(y, x) = v, and • f_{y→x}(x′, y′) = 0 for all pairs (x′, y′) ≠ (y, x).
One can easily check that in this case, indeed, f_{x,v} = f_{y→x} • c.
6.2•. For each morphism f and for every two elements x and y, we can identify the degree f(x, y) as the largest degree v for which the inequality v ≤ f(x, y) holds.
Since, according to Part 6.1 of this proof, the inequality v ≤ f (x, y) can be described in ordered-category terms, we can thus conclude that the degree f (x, y) can also be described in ordered-category terms.
The theorem is proven.
Conclusions
Many concepts of modern mathematics, starting from the basic notions of sets and functions, are described in terms of category theory, and many other mathematical concepts can be reformulated in category terms. Due to the general nature of category theory, such a reformulation often helps to extend notions and results from one area of mathematics to others.
Because of this potential advantage, it is reasonable to ask whether similar fuzzy notions can also be described in category terms. In this paper, we show that fuzzy relations, i.e., fuzzy analogues of functions, can indeed be described in category terms. Specifically, we show that, in the corresponding fuzzy category, we can describe both: • the elements of the original universe of discourse (modulo a 1-1 permutation), and • the fuzzy degrees (modulo a 1-1 monotonic mapping from the interval [0, 1] onto itself).
This result shows the soundness of our axiomatic description of fuzzy sets in the language of categories.
At this moment, what we have is a very theoretical paper. However, we hope that, similarly to how the reformulation of crisp notions in category terms can help generalize the corresponding results, our reformulation will help extend fuzzy results to more general situations, and thus will facilitate future applications.
| 4,524.8 | 2018-01-23T00:00:00.000 | [
"Mathematics",
"Computer Science"
] |
The efficacy of aerosol-cloud radiative perturbations from near-surface emissions in deep open-cell stratocumuli
Aerosol-cloud radiative effects are determined and quantified in simulations of deep open-cell stratocumuli observed during the VAMOS Ocean-Cloud-Atmosphere-Land Study Regional Experiment (VOCALS-REx) campaign off the west coast of Chile. The cloud deck forms in a boundary layer 1.5 km deep, with cell sizes reaching 50 km in diameter. Global databases of ship tracks suggest that these linear structures are seldom found in boundary layers this deep. Here, we quantify the changes in cloud radiative properties due to a continuous aerosol point source moving along a fixed emission line releasing 10^17 particles per second. We show that a spatially coherent cloud perturbation is not evident along the emission line. Yet our model simulates an increase in domain-mean all-sky albedo of 0.05, corresponding to a diurnally averaged cloud radiative effect of 20 W m−2, given the annual-mean solar insolation at the VOCALS-REx site. Therefore, marked changes in cloud radiative properties in precipitating deep open cells may be driven by anthropogenic near-surface aerosol perturbations, such as those generated by ships. Furthermore, we demonstrate that these changes in cloud radiative properties are masked by the naturally occurring variability within the organised cloud field. A clear detection and attribution of cloud radiative effects to a perturbation in aerosol concentrations becomes possible when sub-filtering of the cloud field is applied, using the spatio-temporal distribution of the aerosol perturbation. Therefore, this work has implications for the detection and attribution of effective cloud radiative forcing in marine stratocumuli, which constitutes one of the major physical uncertainties within the climate system. Our results suggest that ships may sometimes have a substantial radiative effect on marine clouds and albedo, even when ship tracks are not readily visible.
Introduction
Aerosol-cloud interactions (aci) in low-level clouds, which span just over a fifth of the Earth's ocean surface (Wood, 2012), contribute the largest uncertainty to estimates of the global mean effective radiative forcing (ERF) of anthropogenic aerosols (Myhre et al., 2013). Current estimates of ERF_aci range from −1.2 W m−2, which would constitute a strong global cooling that would partially offset the effects of warming due to anthropogenic greenhouse gas emissions, to 0.0 W m−2, which would render these effects negligible at the global scale (Boucher et al., 2013).
Reducing this uncertainty substantially through the use of satellite retrievals and global climate models (GCMs) remains challenging. The challenges include issues of collocation in retrievals of aerosol and cloud properties from space (Koren et al., 2007; Charlson et al., 2007), and the inadequate representation of small-scale dynamical processes that contribute to the cloud response in coarse-scale models (Nam et al., 2012; Schneider et al., 2017). Valuable insights into the processes involved and the plausible ranges of cloud radiative perturbations through aerosols have been obtained from the study of ship tracks. These anomalous cloud lines (Conover, 1966) are a phenomenon associated with a characteristic spatial structure, which occurs in low-level stratocumulus clouds. The changed cloud radiative properties within these tracks can be attributed to localised aerosol perturbations (Durkee et al., 2000a, c; Schreier et al., 2006).
Databases obtained from satellite retrievals (Coakley and Walsh, 2002; Christensen and Stephens, 2012; Chen et al., 2015), as well as high-resolution modelling studies (Wang and Feingold, 2009; Wang et al., 2011; Berner et al., 2015), show that the net cloud radiative effect (CRE) in individual ship tracks does not depend only on cloud droplet number increases and size decreases, which occur in almost all cases (Chen et al., 2012), but also on induced changes in cloud morphology, cloud fraction (CF) and liquid water path (LWP). Cloud albedo (A_cld) may not always increase with increased levels of pollution, but may also decrease (Christensen and Stephens, 2012; Berner et al., 2015). Furthermore, localised gradients in aerosol concentration have been shown to induce self-sustaining mesoscale circulations (Chen et al., 2015; Wang et al., 2011), through the local suppression of precipitation in the polluted cloud and the convergence of cold pools transporting moisture into the polluted cloud from the surrounding precipitating clouds. In the global mean, LWP increases of between 16 % and 24 % (depending on above-cloud moisture content) were found in ship tracks that formed within the precipitating cloud regime (Toll et al., 2017).
However, ship tracks are rare in comparison to the number of ocean-going ships that criss-cross the world's oceans (Schreier et al., 2007). Merely 1924 ship tracks were detected over 2 years worldwide (Campmany et al., 2009), while the total ocean-going fleet consists of over 50 000 ships that exceed 500 gross tons in weight (European Maritime Safety Agency, 2014). Understanding what constrains their occurrence in terms of background pollution, boundary layer dynamics and large-scale stability may help constrain the regimes and magnitudes of global effective radiative forcing estimates. In particular, studies of ship tracks in high-resolution models and satellite retrievals have been mostly limited to extremely shallow boundary layers that range in depth from 300 to 600 m (Christensen and Stephens, 2012; Berner et al., 2015; Chen et al., 2015). Ship track formation within one slightly deeper boundary layer of 800 m was investigated in high-resolution simulations by Wang and Feingold (2009) and Wang et al. (2011). Indeed, ship tracks are very rarely detected in satellite retrievals of boundary layers deeper than 800 m (Durkee et al., 2000b; Toll et al., 2017). Yet over 70 % of stratocumulus clouds are found in deeper boundary layers (Muhlbauer et al., 2014).
The potential for albedo changes is particularly high in the open-cell and disorganised stratocumulus regimes, which occur more frequently in the subtropics than the closed-cell regime (Muhlbauer et al., 2014). Both of these regimes are characterised by shallow convective cloud structures that detrain laterally at the cloud top. The detrained cloud sheets that span the regions between the convective structures are optically thin (cloud optical thickness τ < 3), are often associated with low droplet number concentrations (N_d ∼ 5 cm−3) and may contribute substantially to the overall cloud fraction (Wood et al., 2018). Thus, their albedo is highly susceptible to aerosol perturbations in the sense of Platnick and Twomey's (1994) albedo susceptibility definition. Yet the efficacy of aerosol-cloud radiative interactions within these detrained cloud segments remains unclear. In general, the processes governing aerosol-cloud interactions in deep stratocumulus boundary layers remain weakly constrained, with only a few process-level studies (Wang et al., 2010; Kazil et al., 2011; Wood et al., 2011b; Zuidema et al., 2016) that quantify the effects of aerosol pollution on cloud characteristics.
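To make the susceptibility argument concrete: the Platnick and Twomey (1994) albedo susceptibility at fixed liquid water path is dA/dN_d = A(1 − A)/(3N_d), so thin, clean detrained veils respond far more strongly than the updraft cores. The numbers below are illustrative choices, not values from this study:

```python
# Numerical illustration of the Platnick-Twomey albedo susceptibility
# dA/dN_d = A(1 - A) / (3 N_d) at fixed liquid water path.

def albedo_susceptibility(A, Nd):
    """dA/dN_d (per cm^-3) for cloud albedo A and droplet number Nd (cm^-3)."""
    return A * (1.0 - A) / (3.0 * Nd)

# detrained open-cell veil (small A, N_d ~ 5 cm^-3) vs. an updraft core
print(albedo_susceptibility(A=0.2, Nd=5.0))    # ~0.011 per cm^-3
print(albedo_susceptibility(A=0.4, Nd=30.0))   # ~0.003 per cm^-3
```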
Within this study we quantify changes in cloud radiative properties due to aerosol perturbations in deep (boundary layer depth of ∼ 1.5 km) open-cell stratocumulus clouds and discuss dominant mechanisms that constrain the cloud albedo response.
Case description
This study is based on a well-documented case of open-cell stratocumulus clouds embedded within a boundary layer ∼ 1.5 km deep, which was observed during research flight 6 of the VAMOS Ocean-Cloud-Atmosphere-Land Study Regional Experiment (VOCALS-REx) campaign. Detailed information on the particular case and the measurement techniques can be found in Wood et al. (2011a) and Wood et al. (2011b), respectively. Here we give an overview of the two cloud regimes and their characteristics relevant to this study.
The cloud regime was sampled during the early morning hours (03:00 to 08:30 local time) on 28 October 2008. A summary of the cloud properties measured during the campaign is given in Table 1. The characteristic cell size was found to be between 30 and 40 km, which is detected frequently in southeast Pacific stratocumulus clouds (Wood and Hartmann, 2006). A cloud fraction of 56 % was measured in the open-cell regime, which is consistent with the observed high level of detrained cloudy air masses, which spread from the updraft cores into the inner regions of the cells. Furthermore, a cloud cover of this extent is typical for marine open-cell stratocumuli (Muhlbauer et al., 2014; Terai et al., 2014).
The open-cell clouds coincided with moister sub-cloud-layer air masses, as compared to the neighbouring closed-cell regime, and were characterised by low sub-cloud-layer aerosol concentrations (30 cm−3). A strong vertical gradient in aerosol concentration was observed within the open cells near the cloud base, where concentrations decreased rapidly. A strong horizontal gradient in cloud-top droplet number concentration (N_d_top) was observed (Wood et al., 2011a) between the updraft cores (N_d_top ∼ 30 cm−3) and the detrained cloud filaments (N_d_top ∼ 1-10 cm−3). Substantial rates of precipitation were also observed (Table 1).
Table 1. Spatio-temporal averages of liquid water path (LWP), surface precipitation (R_sfc), cloud base precipitation (R_cb), cloud fraction (CF), sub-cloud mean boundary layer aerosol concentration (N_a_sub) and cloud-top droplet number concentration (N_d_top). The first row containing data shows the observations of the open-cell stratocumulus deck obtained during research flight RF06 of the VOCALS-REx campaign on 28 October 2008 between 08:00 and 13:30 UTC. The numerical results, shown in the last two rows, were averaged over the identical time periods over both simulated days. Domain-mean values were computed for LWP, CF and N_a_sub. R_sfc was averaged only over values exceeding 0.1 mm day−1, and R_cb was averaged over all R_cb > 0 mm day−1 (consistent with the observations). N_d_top was diagnosed at the highest model level where the cloud water content exceeded 0.01 g m−3 and was averaged horizontally (cloudy points only). Numbers in brackets denote the interquartile range of each variable, which covers the spatial and temporal variability of the cloud field. Numerical results are shown for the control simulation (ctrl) and the aerosol-perturbed simulation (ship). Further details on the simulations are given in the text.
Simulation set-up
Two simulations were performed using the Weather Research and Forecasting (WRF) model at the convection-resolving scale, with a horizontal grid resolution of 300 × 300 m², a vertical resolution of 30 m and a time step of 3 s, following Wang et al. (2010). The idealised simulations with periodic boundary conditions at the domain edges were initialised with meteorological profiles obtained during research flight 6 of the VOCALS-REx field campaign (Wang et al., 2011; Wood et al., 2011a). A brief overview of the research flight is given in Sect. 2.1. Given the large characteristic spatial scales of the cellular organisation of the cloud field, with cell sizes ranging from 30 to 40 km, simulations were performed on a large domain of 180 × 180 km². The domain was centred on 78° W and 15° S, which is off the west coast of Chile. The model top was specified at an altitude of 2 km, which is 600 m above the boundary layer top. Above this height a standard clear-sky atmosphere profile is assumed for the computation of the radiative fluxes up to the top of the atmosphere. Both simulations were run for 48 h with a fixed divergence rate of 1.67 × 10⁻⁶ s⁻¹, which was estimated from QuikSCAT surface winds (NASA, 2012), and with prescribed surface fluxes. Surface latent and sensible heat fluxes were specified, according to field measurements, as 120 and 15 W m⁻² (defined as positive upward), respectively. The surface pressure was specified as 1018 hPa. For simplicity, mean advective tendencies in the wind field were removed from the soundings.
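For orientation, the sketch below illustrates how a fixed large-scale divergence rate of the kind prescribed here translates into a subsidence profile, w_sub(z) = −D·z; whether WRF applies exactly this functional form in this set-up is an assumption of the sketch, and only the divergence value is taken from the text.

```python
# Subsidence profile implied by a constant prescribed divergence rate.
import numpy as np

D = 1.67e-6                               # prescribed divergence rate (s^-1)
z = np.arange(0.0, 2000.0 + 30.0, 30.0)   # levels up to the 2 km model top, 30 m spacing

w_sub = -D * z                            # subsidence velocity (m s^-1), negative = downward

# At the ~1.5 km boundary layer top this gives roughly -2.5 mm s^-1:
print(w_sub[z == 1500.0])                 # ~ -0.0025 m s^-1
```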
The simulations were performed with the two-moment microphysics scheme of Morrison et al. (2009), with a prognostic treatment of number and mass concentrations for cloud water and rain. The exponents of the cloud liquid water content and N_d in the Khairoutdinov and Kogan (2000) autoconversion rate were adjusted to values obtained from the VOCALS-REx field campaign, namely 3.19 (cloud water exponent) and −1.49 (N_d exponent), respectively. These exponents were obtained for the VOCALS-REx field data using the approach described in Wood (2005). Precipitation formation was artificially suppressed in the first 2 h of the simulation to facilitate a thermodynamic adjustment to the initialisation sounding before including moisture sinks. Cloud condensation nuclei (CCN) were treated as in Wang et al. (2011), with a prognostic log-normal sea salt mode centred at a mean diameter of 500 nm and a variance of 1.5. Aerosols were advected according to grid-scale and subgrid-scale transport tendencies, and aerosol-cloud interactions were included by removing aerosols upon activation, which was treated as in Kravitz et al. (2014). The release of aerosol upon complete evaporation of cloud droplets and rain drops was also simulated. A surface sea salt emission flux of 20 m⁻² h⁻¹ was specified, in line with estimates from previous simulations (Wang et al., 2010; Kazil et al., 2011).
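As a hedged illustration of the adjusted autoconversion scheme, the snippet below evaluates a Khairoutdinov and Kogan (2000)-type power law with the VOCALS-REx-fitted exponents quoted above. The coefficient C is not given in the text; the original KK2000 value is used purely as a placeholder, since refitting the exponents would in practice also change the coefficient.

```python
# Hedged sketch of a KK2000-type autoconversion rate with VOCALS-fitted exponents.

def autoconversion_rate(q_c: float, n_d: float,
                        a: float = 3.19,     # cloud water exponent (VOCALS fit)
                        b: float = -1.49,    # droplet number exponent (VOCALS fit)
                        C: float = 1350.0):  # placeholder coefficient (original KK2000)
    """Autoconversion tendency dq_r/dt (kg kg^-1 s^-1).

    q_c : cloud water mixing ratio (kg kg^-1)
    n_d : cloud droplet number concentration (cm^-3)
    """
    return C * q_c**a * n_d**b

# Illustrative call for a drizzling, clean open-cell cloud parcel:
print(autoconversion_rate(q_c=5e-4, n_d=10.0))
```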
In addition to the control simulation, hereafter named the ctrl simulation, an aerosol perturbation experiment was designed. This simulation, named the ship simulation, followed the set-up of Wang et al. (2011) to allow a direct comparison between the deep boundary layer case and the shallow boundary layer case in terms of aerosol-cloud radiative perturbations. A ship moving at 5 m s⁻¹ through the centre of the domain was allowed to continuously emit sea salt at a rate of 10¹⁷ s⁻¹ with a mean dry radius of 300 nm. This flux was chosen to match emissions within a previously studied case within a shallow open-cell regime (Wang et al., 2011) (see Sect. 3.2.1 for an in-depth discussion). Furthermore, these emissions were consistent with estimates proposed by Salter et al. (2008) for marine cloud brightening applications. The Community Atmosphere Model (CAM) radiation scheme was used in the simulations, and A_cld was estimated as A_cld = τ/(τ + 6.8), where τ denotes the cloud optical depth, which in turn was computed as τ = (3/2) ∫ q_l/(ρ_w R_eff) dz, where ρ_w is 997.0 kg m⁻³, q_l is the liquid water content, R_eff is the effective cloud droplet radius and z is height. The cloud base precipitation rate (R_cb) was computed as the mean precipitation flux across the lowest third of the cloud's vertical extent, consistent with its estimation from observations.
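The albedo diagnostic described above can be summarised in a few lines. The sketch below assumes q_l is a liquid water content per unit volume (kg m⁻³), consistent with the reconstructed optical depth integral; the profile values are illustrative only.

```python
# Cloud optical depth from a model column and the albedo estimate A_cld = tau/(tau + 6.8).
import numpy as np

RHO_W = 997.0  # liquid water density (kg m^-3)

def optical_depth(q_l, r_eff, dz):
    """tau = (3/2) * sum( q_l / (rho_w * r_eff) * dz ) over the column."""
    return 1.5 * np.sum(q_l / (RHO_W * r_eff) * dz)

def cloud_albedo(tau):
    return tau / (tau + 6.8)

# Example: a 300 m thick veil cloud with q_l = 0.05 g m^-3 and r_eff = 15 um
q_l = np.full(10, 0.05e-3)    # kg m^-3
r_eff = np.full(10, 15e-6)    # m
dz = np.full(10, 30.0)        # m (model layer thickness)
tau = optical_depth(q_l, r_eff, dz)
print(tau, cloud_albedo(tau)) # tau ~ 1.5, A_cld ~ 0.18
```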
Evaluation of open-cell characteristics
In order to assess the radiative effect of concentrated and localised aerosol pollution on deep open-cell clouds, the simulations need to demonstrate sufficient skill in capturing the characteristics and dynamics of the open-cell regime. Following initialisation, an unorganised stratiform cloud deck formed in the ctrl simulation. Initial organised structures appeared 6 h after initialisation, following the onset of precipitation (Fig. 1). By the second night, the simulation reproduced the observed length scales of organisation (see Sect. 2.1).
The diurnal evolution of LWP and N_d_top is shown in Fig. 1. Periods when R_cb exceeds 3 mm day⁻¹ are marked in white. The simulation showed a pronounced diurnal cycle in LWP and R_cb during both days. As in Wang et al. (2010), solar heating was found to break up the cell walls, which led to a reduction of LWP in the upper percentiles, a reduction in cloud-base precipitation rates to R_cb < 2 mm day⁻¹ and consequently the loss of cloud-field organisation in the late afternoon. During the night the cloud deck recovered and organisation was re-established.
The ctrl simulation was characterised by a well-mixed cloud layer and a stably stratified sub-cloud layer (see Fig. S1 in the Supplement), which is characteristic of deep boundary layers. This structure developed rapidly following initialisation from the well-mixed state. Within the first 3.5 h the boundary layer deepened by 180 m before stabilising at 1.5 km, and the sub-cloud layer became stratified. A mean R_cb of 9.1 mm day⁻¹ (Table 1) was simulated for the early morning hours during which the VOCALS-REx observations were obtained.
Although the simulated mean R_cb was within the spread of observed precipitation rates (Fig. S2), it was roughly twice as high as the mean R_cb inferred from observations (Table 1). Meanwhile, the mean LWP was underestimated by a factor of 2 in the open-cell regime, which is consistent with an overestimation of precipitation. However, the simulated cumulative precipitation distribution shown in the Supplement (Fig. S2a) shows that the overall distribution of R_cb was well captured in the ctrl simulation and that the bias in the mean originates from a slight overestimation of intense precipitation events exceeding 20 mm day⁻¹. These events are likely to be found within the walls of the open cells, which are characterised by strong updrafts (Fig. S2b).
The microphysical quantities, such as the mean sub-cloud layer aerosol concentration (N_a_sub = 34 cm⁻³) and N_d_top = 8 cm⁻³, were in good agreement with the observations. In the simulations the aerosol particles were lifted into the cloud layer within the cell walls, where they activated and N_d was relatively high. Cloud filaments, many of which are optically quite thin, were detrained horizontally and were characterised by low N_d_top (Fig. 2a) due to the efficient removal through precipitation. While cell-wall N_d may reach up to 40 cm⁻³ in the ctrl simulation (Fig. 2b), characteristic N_d values in detrained cloud filaments, sometimes referred to as "veil clouds" (Wood et al., 2018), were as low as 2-3 cm⁻³. The efficient removal of aerosol particles through cloud processing, combined with the stable stratification in the sub-cloud layer, induced strong vertical gradients in the combined particle number concentration N_tot, defined as N_tot = N_a + N_d, where N_a denotes the aerosol number concentration. Sub-cloud layer N_tot ranged between 30 and 60 cm⁻³, while values below 10 cm⁻³ above the cloud base height were simulated frequently (Fig. 2c).
In summary, despite remaining biases in the mean LWP and R_cb, the simulation overall captured a realistic evolution of the open-cell cloud deck with a pronounced diurnal cycle. Since the overall cloud-cell statistics (Table 1) and the horizontal cloud cover are consistent with observations, this gives us confidence that the underlying cloud dynamics were captured in the ctrl simulation. Regions of detrained cloud spanned 36 % of the domain and were characterised by low in-cloud LWP and N_d_top, which makes them particularly susceptible to aerosol-induced cloud radiative perturbations. Yet any near-surface source of pollution will predominantly be transported into the cloud layer through the cell walls, given the pronounced vertical stratification in the sub-cloud layer, where wet aerosol removal processes are efficient. It therefore remains to be seen whether substantial changes in cloud radiative properties can be induced by near-surface aerosol perturbations.
Efficacy of aerosol perturbation
The sea-salt-perturbed simulation displayed a spatially constrained aerosol plume meandering around the emission line (Fig. 3a). The highest values of N_a_sub, exceeding 1000 cm⁻³, were found within a narrow plume extending up to 60 km behind the point source. Overall, the aerosol perturbation remained spatially constrained within the boundary layer to a region of ±30 km around the emission line. This strip of the domain (spanning 60-120 km in the y direction) is characterised by increased levels of N_a_sub and will from here on be referred to as "seeded", whereas the domain outside this region will be referred to as "unseeded".
Inside the seeded region the emitted aerosol was predominantly transported into the cloud within the updrafts of the cell walls (Fig. S3). Despite efficient wet removal processes within the cell walls, the largest absolute changes in N_d were as large as 600 cm⁻³. At the cloud top, increases in N_d_top of up to 150 cm⁻³ were found (Fig. 3b). From the cell walls, the increased levels of N_d persisted to the detrained cloud regions (Fig. S3), where the largest relative increases in N_d_top were found. On average, N_d_top increased by 177 % within the cell walls and by 213 % within the stratified detrained cloud (Table 2). In this analysis cell walls were diagnosed as cloud-covered regions with updraft speeds exceeding 0.5 m s⁻¹. All remaining, non-wall cloudy grid points were classified as detrained cloud. The largest decreases in the cloud droplet effective radius at the cloud top (R_eff_top) were found to coincide with regions of large increases in N_d_top (Fig. 3c and b, respectively). R_eff_top may be reduced by up to 10 µm locally. The largest decreases in R_eff_top were found within the vicinity of strong updrafts. Here, many aerosols were carried into the cloud layer and were activated. Efficient in-cloud scavenging led to a reduction in N_d_top and an increase in R_eff_top going radially outward from the centre of the updraft cores. Averaged over the seeded domain, a mean reduction in R_eff_top between 3 and 4 µm was simulated in the cell walls and detrained cloud regions (Table 2).

Table 2. Same as Table 1 but for mean values of the last 24 h period. The following additional variables were added to the table: cloud albedo (A_cld), all-sky albedo A_all = CF · A_cld + (1 − CF) · A_clr and cloud-top effective cloud droplet radius (R_eff_top). A_clr denotes the clear-sky albedo, which was determined as A_clr = 0.06 in both simulations. R_eff_top was diagnosed similarly to N_d_top and averaged over cloudy regions only. For A_all and A_cld, domain averages (i.e. including clear sky and cloudy sky) are given. All entries for "wall" (cloud with updraft > 0.5 m s⁻¹) and detrained (non-wall cloud) regions denote in-cloud averages only. CF for wall and detrained cloud denotes the domain area fraction covered by each category. Inside cell walls and cloud filaments CF is 100 %. The seeded region is defined as ±30 km around the emission line and the remainder of the domain is classified as unseeded. (Table header: second-day means; Sim/obs, LWP (g m⁻²), R_sfc (mm day⁻¹), R_cb (mm day⁻¹).)
The changes in cloud-microphysical properties led to an increase in domain-averaged LWP (Table 2) within the seeded (36 %) and unseeded (5 %) regions. Yet lower mean values of in-cloud LWP were found within the detrained cloud and cell-wall regimes in both the seeded and unseeded regions of the ship simulation (Table 2). R_cb was found to decrease by 0.7 mm day⁻¹, while R_sfc remained largely unaffected by the aerosol perturbation (Table 2).
Due to the reduction in R_cb, more cloud water was retained within the updrafts and detrained horizontally into the stratified cloud filaments, which penetrated deeper into the open cells. The increase in areal extent of the detrained cloud sheets was accompanied by a shift (Fig. S5) of the in-cloud LWP distribution towards lower LWP, between 50 and 150 g m⁻². Therefore, the increase in domain-mean LWP, despite the decrease in in-cloud LWP, was attributed to the 14-percentage-point increase in cloud fraction, from 44 % in the ctrl simulation to 58 % in the ship simulation (Table 2). Yet the open cells remain partially uncovered, which prevents a potential transition from the open-cell state to the closed-cell regime.
Mean A_cld increased by 0.15, from 0.18 in the ctrl simulation to 0.33 inside the seeded region of the ship simulation (Table 2). This translates to a change in all-sky albedo (A_all) of 0.11 inside the seeded region, which corresponds to a shortwave cloud radiative effect (SW CRE) of 44 W m⁻² at an annual-mean solar insolation of 404 W m⁻² at the VOCALS-REx field site. Although the strongest increase in A_all was confined to the seeded domain, A_all was found to be increased throughout the simulation domain (Fig. S6). Averaged over the entire domain, A_all increased by 0.05, which is equivalent to a SW CRE of 20 W m⁻² exerted over an area of 180 × 180 km².
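The radiative numbers quoted here follow from a one-line conversion, sketched below with the values from the text.

```python
# delta SW CRE = delta A_all * annual-mean insolation.

S0_SITE = 404.0   # annual-mean solar insolation at the VOCALS-REx site (W m^-2)

def sw_cre_change(delta_albedo: float, insolation: float = S0_SITE) -> float:
    return delta_albedo * insolation

print(sw_cre_change(0.11))  # seeded region: ~44 W m^-2
print(sw_cre_change(0.05))  # domain mean:   ~20 W m^-2
```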
The changes in domain-mean A_all were attributed to albedo changes of the detrained cloud sheets spanning the domain between the cell walls. Both the areal coverage and the reflectivity of the detrained cloud sheets increased in the ship simulation as compared to the ctrl simulation (Table 2). Meanwhile, the cell-wall albedo of 0.6 remained unaffected by the aerosol perturbation. Furthermore, we attributed the simulated change in A_all predominantly to adjustments in the macrophysical cloud properties of the detrained cloud regions. Changes in cloud microphysical properties and the associated Twomey (1991) effect were found to be of secondary importance to the change in all-sky albedo.
The increase in cloud fraction alone, assuming no further changes in in-cloud A_cld (i.e. taking A_cld as in the ctrl detrained and cell-wall regions in Table 2 and multiplying these values by the areal coverage of wall and detrained cloud in the ship simulation), accounts for the majority of the simulated increase in A_all (Table 2).
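A rough decomposition of this kind can be written down directly. In the sketch below, A_clr = 0.06 and the cell-wall albedo of 0.6 are taken from the text, while the detrained-cloud albedo and the area fractions are illustrative placeholders rather than the Table 2 entries.

```python
# Cloud-fraction-only contribution to the all-sky albedo change, holding
# the in-cloud albedos fixed at their ctrl values.

A_CLR = 0.06

def all_sky_albedo(cf_wall, a_wall, cf_det, a_det):
    """A_all = CF_wall*A_wall + CF_det*A_det + (1 - CF)*A_clr."""
    cf = cf_wall + cf_det
    return cf_wall * a_wall + cf_det * a_det + (1.0 - cf) * A_CLR

# ctrl-like state vs a state where only the detrained areal coverage changed:
a_ctrl = all_sky_albedo(cf_wall=0.08, a_wall=0.6, cf_det=0.36, a_det=0.25)
a_cf_only = all_sky_albedo(cf_wall=0.08, a_wall=0.6, cf_det=0.50, a_det=0.25)
print(a_ctrl, a_cf_only, a_cf_only - a_ctrl)
```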
Figure 4 shows the normalised occurrence rate (F) within the detrained cloud regions. F, and the change in F due to the aerosol perturbation, is shown for each bin of the LWP-A_cld phase space (Fig. 4a and b, respectively). The behaviour of F within the N_d_top-A_cld space, which was sub-filtered to include only points where the in-cloud LWP ranged between 40 and 60 g m⁻², is shown in Fig. 4c and d. The behaviour of F for other LWP sub-ranges was found to be qualitatively similar (Fig. S7).
The increased occurrence of moderate LWP values (50 ≤ LWP < 150 g m⁻²) may locally coincide with an increase in A_cld (Fig. 4b). Yet the overall decrease in in-cloud LWP by 5 %-23 % (Table 2) implies that the increase in A_cld within the cloud filaments cannot be attributed to LWP adjustments. If anything, A_cld would be expected to decrease given the reduction in in-cloud LWP. Meanwhile, Fig. 4d displays a clear shift in F towards higher N_d_top associated with locally increased A_cld. Hence, the increase in in-cloud A_cld was attributed to the Twomey (1991) effect within the stratified cloud.
Contrasting the cloud response in deep and shallow open cells
Although the areal coverage of the detrained cloud between the cell walls of the open cells increased, which contributed to the brightening of the cloud deck, the highly concentrated aerosol perturbation was insufficient to induce a transition from open to closed cells in these simulations. Aerosols may affect this transition via aerosol-precipitation interactions. In previous simulations of the 800 m deep boundary layer observed during DYCOMS-II, decreases in N_a from 90 to 10 cm⁻³ facilitated a rapid transition from the closed-cell to the open-cell state (Feingold et al., 2015). Yet the reverse transition, from the open-cell state to the closed-cell state, occurred over far longer timescales, if at all (Wang and Feingold, 2009; Feingold et al., 2015). Nonetheless, strongly concentrated sea salt emissions of 10¹⁷ particles s⁻¹ within the same boundary layer induced a transition from the open-cell state to a filled-in cloud-cell state along the seeding line, where a secondary circulation maintained the cloud layer within the track while depleting the surrounding cloud (Wang et al., 2011). Such transitions from open cells to a closed-cell state along ship tracks have also been observed using remote sensing (Goren and Rosenfeld, 2012).
While a ship track formed in the shallow boundary layer with open cells between 10 and 15 km in size (Fig. 5a), a well-defined track did not form in the deep boundary layer, with characteristic cloud cell sizes of 30-40 km (Fig. 5b). A ship track is also not detected in N_d_top or R_eff_top (Fig. 3b and c, respectively). The absence of a track in the deep boundary layer is largely attributed to (i) the large spatial scales of variability within the background cloud state, which are determined by the cloud dynamics and cloud-field organisation, and (ii) the incomplete filling-in of the detrained cloud between cell walls, which prevents the transition to a 100 % cloud-covered state.
A change in A_cld of 0.15, which is of the same magnitude as previously identified in ship tracks (Christensen and Stephens, 2011; Goren and Rosenfeld, 2012; Wang et al., 2011), was found embedded in the deep open cells of the ship simulation. N_d_top increased by 167 % and R_eff_top decreased by 14 % (Table 2). Yet these effects remain seemingly hidden in the large variability of the cloud properties governed by the dynamics of the cloud cells. Furthermore, these effects may not easily be attributed to aerosol perturbations via remote sensing of cloud properties, as most of the changes in local cloud properties remain within the variability of the system.
Knowing the position and extent of the aerosol perturbation allows one to remove a sufficient amount of variability within the ship simulation to obtain a spatially constrained, detectable and attributable response in the cloud properties. Averaging along the spatial dimension of the aerosol perturbation (coinciding with the x direction of the simulation domain) highlights the pronounced shift in cloud properties between the seeded and unseeded regions of the ship simulation (Fig. 6).
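The averaging operation itself is simple to express. The sketch below collapses a two-dimensional cloud field along the emission-line (x) dimension for the 180 × 180 km² domain at 300 m grid spacing, with the seeded-region bounds from the text; a random field stands in for the model output.

```python
# Along-track averaging to expose the seeded/unseeded contrast.
import numpy as np

ny = nx = 600                 # 180 km / 0.3 km grid points
y_km = np.arange(ny) * 0.3    # cross-track coordinate (km)

def along_track_mean(field_2d: np.ndarray) -> np.ndarray:
    """Average over the x (emission line) dimension -> profile in y."""
    return field_2d.mean(axis=1)

# Seeded region: +/- 30 km around the emission line at y = 90 km
seeded = (y_km >= 60.0) & (y_km <= 120.0)

field = np.random.rand(ny, nx)            # stand-in for N_d_top, A_cld, LWP, ...
profile = along_track_mean(field)
print(profile[seeded].mean(), profile[~seeded].mean())
```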
However, while changes in total albedo induced within the seeded region may be identified in this manner, the change in total albedo of 0.03 (Table 2) within the detrained cloud regions of the surrounding unseeded domain would still not be accounted for.Furthermore, changes in A cld within the detrained cloud sheets were found up to 60 km from the emission line, which has implications for the definition of the truly unperturbed albedo within satellite retrievals of such scenes.
Implications for aerosol radiative forcing estimates in marine stratocumuli
Estimating the aerosol-induced radiative forcing in low-level marine clouds constitutes a considerable uncertainty in the overall cloud radiative forcing of anthropogenic aerosols. Satellite-based estimates of CRE changes due to ship exhaust have remained inconclusive because of the high degree of variability within the natural cloud scene (Peters et al., 2011). GCM estimates provide a wide range of CRE changes, between −0.6 and −0.07 W m⁻², due to open-ocean shipping (Lauer et al., 2007; Righi et al., 2011; Peters et al., 2012; Partanen et al., 2013). Furthermore, it remains unclear whether GCMs represent the relevant scales of variability to provide reliable CRE estimates. The analysis of global datasets of ship tracks (Chen et al., 2015) and volcano plumes (Toll et al., 2017), which have been used as analogues to study the cloud response to anthropogenic emissions, has shown that in the global mean the cloud response within the tracks largely follows the brightening expected by Twomey (1991). In the global mean, increases and decreases in LWP within the different cloud regimes seem to offset one another, while many GCMs predict only a positive LWP response (Wang et al., 2012; Ghan et al., 2016; Malavelle et al., 2017; Toll et al., 2017).
In this study we demonstrate that non-negligible amounts of brightening due to anthropogenic shipping emissions may persist in the absence of a clear ship track. In deep open cells, perturbations in A_cld were found to be as large as 0.15 in regions where N_a_sub is high and as large as 0.08 when integrated over the whole simulation domain of 180 × 180 km². Furthermore, the induced brightening, which is almost as high as in simulations displaying a pronounced ship track (ΔA_cld = 0.1 in Wang et al., 2011), remains obscured by the variability of the unpolluted cloud, where LWP and N_d_top may themselves differ by an order of magnitude between convective cell walls and stratified regions of detrained cloud (Fig. 2). Furthermore, while these simulations are highly idealised in their set-up, they do not necessarily reflect unrealistic emission conditions. The prescribed ship is assumed to travel periodically along an identical emission line without any crosswind, which could alter the plume size or dilute emissions more effectively. Within the 48 h simulation, a total of five ships traverse the 180 × 180 km² domain repeatedly at a constant sailing speed of 5 m s⁻¹, and the cloud response to their combined emissions is assessed. Throughout most of the North Pacific a shipping density of around 30 ships per 100 km² per year is observed (MarineTraffic, 2018). Assuming a speed of 5 m s⁻¹ (or even 10 m s⁻¹), such a density corresponds to an estimated number of 116 (58) ships within the simulation domain on average. Within the North Atlantic, the higher density of ships could even correspond to over 200 (400) ships within a 180 × 180 km² domain (MarineTraffic, 2018). Therefore, our emission scenario is equivalent to merely 1 %-9 % of these ships contributing to increased CCN concentrations within the seeded domain.
Increases in cloud-scene albedo were attributed to changes in brightness within the stratified, detrained cloud regions covering the boundary layer between convective cell walls. These detrained cloud regions are optically thin (τ = 2.8) and are often referred to as veil clouds. They are connected to the sub-cloud layer aerosol through the convective cell walls feeding into the detrained cloud regions. In our simulations, these detrained cloud filaments contributed 82 % to the overall cloud fraction. In summary, our results suggest that although detectable ship tracks are extremely rare in deep boundary layers, an increase in A_cld of the order of 0.1 may persist in deeper boundary layers of open-cell stratocumuli. Furthermore, our simulations suggest that the albedo increase within this regime, which is currently not picked up in ship track analyses, could be driven predominantly by increases in cloud fraction, as opposed to the Twomey (1991) effect.
While these simulations are limited in their generality, they do demonstrate that substantial changes in A_cld may occur in optically thin veil clouds and that the aerosol-induced changes in cloud-scene albedo may prove extremely difficult to attribute without knowing the spatio-temporal distribution of the aerosol perturbation. Despite significant changes in cloud-scene albedo, an attribution of these changes to an aerosol perturbation using satellite retrievals of cloud properties and vertically integrated aerosol metrics alone could prove extremely difficult in this cloud regime (Fig. 3). Yet such open-cell cloud scenes with a substantial cloud fraction and a high percentage of veil clouds occur often (McCoy et al., 2017), and they occur in regions of high solar insolation. Therefore, aerosol-induced cloud radiative perturbations within these clouds may be relevant to global estimates of aerosol-cloud radiative forcing.
Our results strongly motivate further research into the efficacy of aerosol perturbations in deep open-cell stratocumulus. Here we demonstrate that the aerosol forcing in this regime could be substantial. Yet for a clear assessment, the occurrence rate and magnitude of A_cld changes in stratified detrained cloud remnants need to be known. One approach to constraining these aerosol-induced perturbations could be field measurements around known aerosol perturbations. Such measurements would allow the detection and attribution of cloud radiative effects to aerosol perturbations.
Conclusions
The analysis of ship tracks, and of the changes in cloud radiative properties within them, has arguably provided an extremely useful framework for developing a mechanistic understanding of aerosol-cloud radiative interactions and for constraining the effective cloud radiative forcing within marine low-level clouds. However, linearly shaped tracks are extremely rare and tend to form in shallow boundary layers with a top below 800 m (Durkee et al., 2000b; Christensen and Stephens, 2012; Chen et al., 2015; Toll et al., 2017).
At least 70 % of marine stratocumuli form in deeper boundary layers, where distinct ship tracks due to ship emissions are very rarely detected. Furthermore, 73 % of all stratocumuli globally are likely to occur within the open-cell or disorganised regime (Muhlbauer et al., 2014). Here, we assessed in idealised cloud-resolving simulations whether significant cloud radiative perturbations persist in a field of deep (boundary layer top at 1.5 km) open-cell stratocumulus, which was observed during RF06 of the VOCALS-REx campaign. Our key findings are summarised as follows:

1. Albedo changes equivalent to albedo increases in previously observed ship tracks within shallow open-cell stratocumuli were embedded within a stratocumulus deck of deep open cells, despite the absence of a spatially coherent structure such as a ship track. The domain-mean all-sky albedo increased by 0.05 due to a prescribed seeding source (a sea salt emission source moving at 5 m s⁻¹, which released particles of 300 nm in size at a rate of 10¹⁷ particles s⁻¹). This translates to a change in the SW CRE of 20 W m⁻², for an annual-mean solar insolation of 404 W m⁻² at this site.

2. Regional changes in A_cld (increase by 0.15) and in cloud microphysical (167 % increase in N_d_top and 14 % decrease in R_eff_top) and macrophysical properties (14 % absolute increase in CF and a 5 %-23 % decrease in in-cloud LWP) within the seeded domain (±30 km around the emission line) remain obscured by the naturally occurring variability of the cloud field. Reducing the variability of the clouds by averaging along the spatial extent of the aerosol perturbation permitted the detection and attribution of these cloud radiative effects to the aerosol perturbation. Hence, knowledge of the spatio-temporal distribution of the aerosol perturbation was found to be necessary for the remote attribution of aerosol effects on cloud radiative properties within this regime.
3. The simulated cloud brightening was attributed to the brightening of the detrained cloud filaments that spanned the regions between the convective cell walls of the open cells. These so-called veil clouds occur frequently in low-level cloud layers and are connected to sub-cloud aerosol sources through the convective cloud cores within the cell walls. Within these clouds the brightening was largely attributed to increases in cloud fraction, with a secondary contribution to brightening due to changes in cloud microphysical properties.

Competing interests. The authors declare that they have no conflict of interest.
Figure 1. (a) Time series of domain-mean liquid water path, LWP (green), and cloud-top droplet number concentration, N_d_top (red), for the ctrl simulation. Shading (blue for LWP, black for N_d_top) denotes the interdecile percentile range. Snapshots of LWP are shown (b) after initial organised structures developed, (c) after the solar maximum and (d) for the second-day organised state.
Figure 2. (a) Instantaneous cloud-top droplet number concentration, N_d_top (snapshot corresponding to the LWP field shown in Fig. 1d). The black line denotes the location of the cross section of (b) cloud droplet number concentration (N_d) and (c) total number concentration (N_tot = N_a + N_d, where N_a denotes the aerosol number concentration). Cloud top is defined as in Table 1.
Figure 3. Snapshots of (a) vertically averaged sub-cloud layer aerosol concentration (N_a_sub), (b) cloud-top cloud droplet number concentration (N_d_top), (c) cloud-top mean effective cloud droplet radius (R_eff_top), (d) cloud albedo (A_cld) and (e) liquid water path (LWP). Instantaneous fields are shown at 12:00 UTC for the ship simulation. Fields for the ctrl simulation are shown in the Supplement (Fig. S4).
Figure 4. Occurrence rate F (%) for (a) the liquid water path (LWP) versus cloud albedo (A_cld) phase space and (c) the cloud-top droplet number concentration (N_d_top) versus A_cld phase space. The N_d_top-A_cld space was sub-filtered for LWP within the range of 40-60 g m⁻². Results are shown in (a) and (c) for the last 24 h of the ctrl simulation. Absolute changes in F for the ship simulation with respect to the ctrl simulation are shown in (b) and (d), respectively. F is normalised to 100 % across the shown phase space. The bin widths over which F is defined are LWP: 7 g m⁻², N_d_top: 1 cm⁻³ and A_cld: 0.01.
Figure 5. (a) Cloud albedo field from Fig. 1a in Wang et al. (2011) for an 800 m deep boundary layer. (b) Cloud albedo at 12:00 UTC on the second day of the ship simulation for a 1.5 km deep boundary layer. Both simulations were subject to an equal seeding source of 10¹⁷ particles s⁻¹.
Figure 6. Hovmöller diagrams of (a) sub-cloud layer mean aerosol concentration (N_a_sub), (b) cloud-top cloud droplet number concentration (N_d_top), (c) cloud-top mean effective cloud droplet radius (R_eff_top), (d) cloud albedo (A_cld) and (e) liquid water path (LWP). Spatial averages were obtained along the emission-line dimension (coinciding with the x dimension of the simulation domain). Hovmöller diagrams for the ctrl simulation are shown in the Supplement (Fig. S8).
Supplement. The supplement related to this article is available online at: https://doi.org/10.5194/acp-18-17475-2018-supplement.

Author contributions. AP, HW, KC, RW and TPA designed the research. AP performed the research. HW contributed code adjustments to WRF. AP analyzed data. AP, HW, KC, RW and TPA wrote the paper.
Data availability. The data were made available in December 2018 and are open access. The WRF code is open access with public documentation.
Stress redistribution in individual ultrathin strained silicon nanowires: a high-resolution polarized Raman study
Strain nano-engineering provides valuable opportunities to create high-performance nanodevices by a precise tailoring of the semiconductor band structure. Achieving these enhanced capabilities has sparked a surge of interest in controlling strain on the nanoscale. In this work, the stress behavior in ultrathin strained silicon nanowires directly on oxide is elucidated using background-free, high-resolution polarized Raman spectroscopy. We established a theoretical framework to quantify the stress from Raman shifts, taking into account the anisotropy associated with the nanowire quasi-one-dimensional morphology. The investigated nanowires have lateral dimensions of 30, 50 and 80 nm and a length of 1 μm, and were fabricated top-down by patterning and etching 15 nm thick biaxially tensile-strained silicon nanomembranes generated using heteroepitaxy and ultrathin layer transfer. The concern over the contribution of Raman scattering at the nanowire 〈110〉 oriented sidewalls is circumvented by precisely selecting the incident polarization relative to the sidewalls of the nanowire, thus enabling an accurate and rigorous analysis of stress profiles in individual nanowires. Unlike suspended nanowires, which become uniaxially strained as a result of free-surface-induced relaxation, we demonstrate that stress profiles in single nanowires are rather complex and non-uniform along different directions due to the oxide-nanowire interface. As a general trend, higher stresses are observed at the center of the nanowire and are found to decrease linearly as a function of the nanowire width. Using multi-wavelength high-resolution Raman spectroscopy, we also extracted the stress profiles at different depths in the nanowire. The residual stress in the top ∼10 nm of the nanowire was found to be nearly uniaxial and to increase from the edge toward the center, which remains highly strained. In contrast, the average stress profiles measured over the whole nanowire thickness exhibit a different behavior, characterized by a plateau in the region ∼200 nm away from the edges. Our observations indicate that the lattice near the newly formed free surface moves inwards and drags the underlying substrate, leading to a complex redistribution of stress. This nanoscale patterning-induced relaxation has direct implications for the electrical and mechanical properties of strained silicon nanowires and provides myriad opportunities to create entirely new strain-engineered nanoscale devices.
Introduction
Silicon (Si) nanowires are quasi-one-dimensional nanostructures providing a wealth of opportunities to implement a variety of nanoscale technologies [1-5]. In particular, nanowire-based architectures have been introduced to extend the lifetime of traditional Si electronics by achieving lower power consumption, higher performance and scalable devices, in response to the relentless course toward miniaturization imposed by Moore's law [6]. In this perspective, to overcome the limitations faced by two-dimensional planar devices, transistors with nanowire-like channels are currently used in the fabrication of high-performance microprocessors and memory devices for wireless systems such as mobile phones and radars [6]. Unlike freestanding nanowire-based devices [7], these new technologies employ nanowires having three free facets while the fourth one forms an interface with the underlying substrate [5, 6].
Both of these categories of nanowires suffer, however, from degradation in the carrier velocity as a result of quantum confinement and scattering at the sidewalls, which hinders the device performance [8,9]. Stress nano-engineering has emerged as a powerful stratagem to alleviate these limitations and extend the capabilities of nanowire-based electronics [7].
Strain engineering has been used in the microelectronics industry since the 90 nm technology node. The main approach currently employed consists of depositing stressor layers on top of transistors. However, as the dimensions of nanowire-based transistors become smaller than the required thickness of such an overlayer, it is difficult to carry this approach into smaller nodes or other nanowire-based architectures. Recently, it was demonstrated that ultrathin, globally strained silicon layers (nanomembranes) are the material of choice for generating strained Si nanowires using top-down nanofabrication processes [7, 10, 11]. Interestingly, upon nanoscale patterning of biaxially strained nanomembranes (a crucial step in the fabrication of strained nanowires), the formation of free surfaces induces a local relaxation of strain due to the rearrangement of lattice atoms near the newly formed edges [10, 12-16]. In general, the extent of this phenomenon depends on dimension and geometry [12-16]. However, it is widely assumed that, due to the nanowire geometry (i.e. the high aspect ratio), the post-patterning stress becomes uniaxial, meaning that the stress is fully relaxed along the width of the nanowire [7, 11]. As demonstrated in this work, however, this is not the case for nanowires fabricated directly on oxide. More precisely, the nanowire-substrate interface makes the strain redistribution rather complex, as one of the four facets is stabilized by the underlying oxide layer. Exploring and understanding this subtle but important phenomenon is crucial for accurate strain nano-engineering and a precise prediction of the performance of strained Si nanowire-based devices.
In order to accurately probe the evolution of strain at the nanoscale, we have developed a high-resolution polarized Raman spectroscopy method that allows the analysis of the individual contributions of longitudinal-optical (LO) and transverse-optical (TO) phonons and of their profiles in a single nanowire [10]. However, the use of a single excitation wavelength (532 nm) in that work only permits the measurement of phonon profiles averaged over the whole thickness of a nanowire. Moreover, while the simultaneous detection of LO and TO phonon modes can directly characterize anisotropic stress relaxation at each position, TO phonon detection remains challenging due to its strong sensitivity to the laser polarization relative to the sample geometry, and thus the allowed LO phonon signal is always admixed. Herein, to circumvent these limitations, we employ two linearly polarized excitation wavelengths corresponding to different penetration depths in silicon, thereby achieving LO phonon detection as a local stress probe at different depths within a single nanowire. We also discuss and take into consideration the effects of scattering at the nanowire sidewalls on the obtained Raman data and present a method for an accurate determination of stress in nanoscale structures. The effect of the nanowire lateral dimension on the stress relaxation is also addressed.
Sample preparation
The Si nanowires were generated by patterning and etching a 15 nm thick biaxially tensile-strained Si nanomembrane, combining electron beam lithography and dry reactive ion etching. The strained nanomembrane was synthesized by growing epitaxially an ultrathin layer of Si on a ∼500 nm thick Si0.84Ge0.16 relaxed buffer layer grown on a Si (001) substrate using reduced-pressure chemical vapor deposition. The obtained strained-Si/Si0.84Ge0.16/Si heterostructure was subsequently capped with an SiO₂ layer deposited by plasma-enhanced chemical vapor deposition. A second substrate was prepared by deposition of a ∼120 nm thick Ge layer on Si (001) using solid-source molecular beam epitaxy. This wafer was also covered with a ∼200 nm thick SiO₂ layer. The ultrathin strained Si layer was transferred from the first wafer onto the second one using direct wafer bonding followed by ion-induced slicing and selective chemical etching of SiGe, leading to a strained Si/SiO₂/Ge/Si heterostructure. The intermediate Ge layer was introduced to prevent the excitation laser from reaching the Si handle substrate in the case of excitation in the visible range or when the region between nanowires is exposed to the laser. Thus, the background from the underlying Si substrate is eliminated and only the Raman signal from the strained nanowire is detected, thereby achieving a higher sensitivity to local phonons, which is critical for a precise analysis of stress. Ordered arrays of nanowires having a fixed length of 1000 nm and widths of 30, 50 or 80 nm were fabricated. The nanowire arrays were patterned in a negative resist using electron-beam lithography. Reactive ion etching was applied to transfer the pattern to the strained layer. The nanowires are aligned along the ⟨110⟩ direction and separated by 500 nm from each other to allow the exposure of a single nanowire to the laser beam during Raman analysis. The ⟨110⟩ nanowires were investigated because of their practical importance as high-performance channels for field-effect transistors. The etching was performed at −60 °C using a mixture of SF₆ (100 sccm) and O₂ (5 sccm) with a relatively low power of 40 W. These conditions were optimized to eliminate possible damage during the etching process; here the chemical reactivity is dominant. High-resolution transmission electron microscopy investigations (not shown here) confirmed the absence of any damage near the newly formed edges. Figure 1 displays typical atomic force microscopy (top) and scanning electron microscopy (bottom) images of arrays of nanowires having widths of 80 and 30 nm, respectively. The figure also depicts the sidewall orientation of the nanowires relative to the polarization of the excitation laser. The long axis of the nanowire is aligned along the ⟨110⟩ direction. The crystal and the sample coordinates are denoted x, y, z and x′, y′, z′, respectively. Porto notation, z(x′x′)z or z(y′y′)z, is used to represent the polarization conditions. The laser beam (λ = 442 or 355 nm) passes through a linear polarizer and a half-wave plate. The linear polarizer sets the incoming polarization from the laser. The half-wave plate is used to align the incident polarized beam parallel (x′) or perpendicular (y′) to the nanowire long axis. The beam is then expanded and focused onto the nanowires using an oil-immersion objective lens (numerical aperture = 1.4, ×125). The surface of the strained Si is directly immersed in oil.
This configuration improves the laser focusing and minimizes index-mismatch-induced spherical aberrations. According to the Rayleigh criterion (0.61λ/NA with NA = 1.4), the diameter of the laser spot is approximately λ/2. The backscattered Raman signal is collected by the same objective lens and passes through a pinhole (φ = 50 µm) and an edge filter that blocks the strong Rayleigh line, so that only the backscattered Raman signal is detected. The backscattered light then goes through an analyzer before reaching a spectrometer (grating = 1800 g mm⁻¹, slit width = 50 µm) equipped with a thermoelectrically cooled charge-coupled device (CCD) camera. A single Lorentzian function was fitted to the Si Raman peak for all spectra to accurately determine its frequency, width and intensity. The system spectral resolution is 0.02 nm, and the accuracy of our analysis after Lorentzian fitting is ∼0.02 cm⁻¹. The samples with nanowire arrays were mounted on an x-y translation stage and were scanned with a 25 nm step while exposed to the focused laser beam. At each step, a Raman spectrum was recorded. To prevent laser heating, Raman spectra were acquired at 1.0 mW power with a 5 s exposure time for the 442 nm laser; an exposure of 5 min and a power of 5 µW were used for the 355 nm laser. Note that, due to the dependence of the Si absorption coefficient on the photon frequency [17], the laser penetration depth (δ = λ/(4πk)) is calculated to be ∼10 nm at 355 nm and ∼168 nm at 442 nm, where λ and k are the wavelength and the imaginary part of the refractive index, respectively. The longer penetration depth permits probing the stress profiles over the total thickness of the nanowire, whereas the shallower penetration allows the analysis of the top 10 nm of the nanowire.
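The two probing depths quoted above follow directly from δ = λ/(4πk). In the sketch below, the extinction coefficients k are approximate literature values for silicon chosen to reproduce the stated depths; they are not quoted in the text.

```python
# Optical penetration depth delta = lambda / (4 * pi * k).
import math

def penetration_depth_nm(wavelength_nm: float, k: float) -> float:
    return wavelength_nm / (4.0 * math.pi * k)

print(penetration_depth_nm(355.0, 2.9))    # ~10 nm  (UV excitation)
print(penetration_depth_nm(442.0, 0.21))   # ~168 nm (visible excitation)
```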
Analytical methods for stress calculation in nanowires
The local stress in the Si lattice is obtained from Raman shifts by solving the well-known secular equation. In the case of the unpatterned nanomembrane, the equi-biaxial ($\sigma_{x'x'} = \sigma_{y'y'} = \sigma_0$) model of the stress-induced Raman shift for the doublet ($\Delta\omega_{xy}$: transverse optical TO phonons) and the singlet ($\Delta\omega_z$: longitudinal optical LO phonon) is [18]

$$\Delta\omega_{xy} = \frac{p(S_{11}+S_{12}) + q(S_{11}+3S_{12})}{2\omega_0}\,\sigma_0, \quad (1)$$

$$\Delta\omega_z = \frac{pS_{12} + q(S_{11}+S_{12})}{\omega_0}\,\sigma_0, \quad (2)$$

where $\Delta\omega_{xy} = \omega_{\varepsilon\text{-Si}} - \omega_{\text{Si}}$ and $\Delta\omega_z = \omega_{\varepsilon\text{-Si}} - \omega_{\text{Si}}$ are expressed through the Raman frequency of each mode in the presence ($\omega_{\varepsilon\text{-Si}}$) and absence ($\omega_{\text{Si}}$) of stress, and $\omega_0$ is the optical phonon frequency of unstrained Si. The $S_{11} = 7.68$ and $S_{12} = -2.14$ (in $10^{-3}$ GPa$^{-1}$) are the elastic compliance tensor elements, while $p = -1.85\,\omega_0^2$ and $q = -2.31\,\omega_0^2$ are the phonon deformation potentials for bulk Si at room temperature [19]. The stress relaxation ratio upon patterning at any position in the nanowire is calculated using $R(x, y) = 1 - \sigma_{\text{nanowire}}(x, y)/\sigma_0$, where $\sigma_0$ and $\sigma_{\text{nanowire}}$ correspond to the initial stress before nanopatterning and the measured stress in the nanowires, respectively [22]. The initial stress, $\sigma_0$, in the nanomembrane is calculated from the measured Raman shift using equation (2). However, it is important to note that equation (2) cannot be used for the extraction of stress in nanowires, $\sigma_{\text{nanowire}}$, mainly because the assumption of zero shear stress used to obtain it is no longer valid: the stress is no longer bi-isotropic in the nanowire, owing to geometrical constraints and non-uniform rearrangements of atoms near the newly formed free surfaces. In fact, in a nanowire the shear strain terms in the crystal coordinates are no longer zero, due to the anisotropic strain relaxation along the nanowire axes. In order to obtain the nanowire stress, we first calculate the strain and stress components in the sample coordinates, where the shear terms are all zero owing to the symmetry of the structure in the sample coordinate system, and then transform them into the crystal coordinates [20]. At the center of the nanowire, the strain component along the nanowire length ($\varepsilon_{x'x'}(x', d)$) retains almost the value of the original membrane, $\varepsilon_0$, but the stress along the width ($\sigma_{y'y'}(x', d)$) is dramatically relaxed due to the reduced dimension in this direction; $d$ represents the width of the nanowires. The shear strain component becomes zero in the middle of the nanowire, and the stress component along $z$ is equal to zero due to the free-surface boundary condition; the strain and stress tensors are then expressed as [20]

$$\{\varepsilon'\} = \mathrm{diag}(\varepsilon_{x'x'}, \varepsilon_{y'y'}, \varepsilon_{z'z'}), \quad \{\sigma'\} = \mathrm{diag}(\sigma_{x'x'}, \alpha(x', d)\,\sigma_0, 0), \quad (3)$$

where $\alpha(x', d) = \sigma_{y'y'}(x', d)/\sigma_0$, $s = C_{11} + C_{12} - 2C_{12}^2/C_{11}$ (so that $\sigma_0 = s\,\varepsilon_0$) and $H = 2C_{44} + C_{12} - C_{11}$. The elastic constants $C_{11} = C_{22} = 166$ GPa, $C_{12} = C_{21} = 64$ GPa, $C_{44} = 79.6$ GPa, $C'_{12} = C_{12} - 0.5H = 35.4$ GPa, $C'_{11} = C_{11} + 0.5H = 194.6$ GPa, $C'_{13} = C_{12} = 64$ GPa and $C'_{33} = C_{11} = 166$ GPa are obtained from [20, 21]. The constant $\alpha(x', d)$ varies between 0 and 1 and depends on the diameter of the nanowire. At the limits, $\alpha(x', d)$ equals 0 for a uniaxial stress and 1 for a bi-isotropic stress. After substitution of the boundary conditions, the strain tensor in the sample coordinates reduces to

$$\varepsilon_{y'y'} = \frac{s\,\alpha(x', d)\,\varepsilon_0 - (C'_{12} - C'^2_{13}/C'_{33})\,\varepsilon_{x'x'}(x', d)}{C'_{11} - C'^2_{13}/C'_{33}}, \quad \varepsilon_{z'z'} = -\frac{C'_{13}}{C'_{33}}(\varepsilon_{x'x'} + \varepsilon_{y'y'}), \quad (4)$$

and after transformation to the crystal coordinate system:

$$\varepsilon_{xx} = \varepsilon_{yy} = \tfrac{1}{2}(\varepsilon_{x'x'} + \varepsilon_{y'y'}), \quad \varepsilon_{xy} = \tfrac{1}{2}(\varepsilon_{x'x'} - \varepsilon_{y'y'}), \quad \varepsilon_{zz} = \varepsilon_{z'z'}. \quad (5)$$

Using the strain components above, the solution to the well-known secular equation for the LO phonon mode is expressed as

$$\Delta\omega_z = \frac{p\,\varepsilon_{zz} + q(\varepsilon_{xx} + \varepsilon_{yy})}{2\omega_0}. \quad (6)$$

In equation (6), there are two unknown parameters, the strain $\varepsilon_{x'x'}(x', d)$ along the nanowire long $x'$-axis and $\alpha(x', d)$, the stress relaxation factor along the nanowire width, which we have to extract from the Raman measurement.
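As a numerical illustration of equation (2) as written above, the sketch below inverts the singlet shift to recover the biaxial membrane stress; ω₀ = 520.7 cm⁻¹ for unstrained bulk Si is a standard value assumed here rather than one quoted in the text.

```python
# Biaxial stress of the unpatterned membrane from the singlet (LO) Raman shift.

OMEGA_0 = 520.7                 # bulk Si LO phonon frequency (cm^-1), assumed
S11, S12 = 7.68e-3, -2.14e-3    # elastic compliances (GPa^-1)
P, Q = -1.85, -2.31             # deformation potentials (units of OMEGA_0**2)

def biaxial_stress_from_shift(delta_omega_z: float) -> float:
    """Invert eq. (2): delta_omega_z = sigma_0 * OMEGA_0 * (P*S12 + Q*(S11 + S12))."""
    coeff = OMEGA_0 * (P * S12 + Q * (S11 + S12))  # ~ -4.6 cm^-1 per GPa
    return delta_omega_z / coeff

# Membrane in this work: omega = 515.42 cm^-1, i.e. a shift of -5.28 cm^-1
print(biaxial_stress_from_shift(515.42 - OMEGA_0))  # ~1.15 GPa tensile
```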
However, given that we detect only the LO phonon mode in the conventional backscattering geometry, it is impossible to determine the two unknown parameters from the single phonon mode. To circumvent this limitation and obtain the stress in nanowires, it is widely accepted that the stress along the width of the nanowire can be assumed to be fully relaxed, that is $\alpha(x', d) = 0$ (see e.g. [13, 20, 21] and references therein). Herein, we demonstrate that this assumption is too simplistic and leads to an inaccurate analysis of stress. Using equation (5) at $\alpha(x', d) = 0$, the stress along the $x'$-axis is given by

$$\sigma_{x'x'}(x', d) = 169.25 \times \varepsilon_{x'x'}(x', d) \ \text{(in GPa)}. \quad (7)$$

By substituting the corresponding strain components, $\varepsilon_{zz} = -0.36 \times \varepsilon_{x'x'}(x', d)$ and $\varepsilon_{xx} = \varepsilon_{yy} = 0.47 \times \varepsilon_{x'x'}(x', d)$, into equation (6), the stress (in GPa) along the long axis can be expressed in terms of the measured Raman shift as

$$\sigma_{x'x'}(x', d) \approx -0.43 \times \Delta\omega_z. \quad (8)$$

In an alternative approach, we propose the introduction of an edge structure (edge parallel to the $y'$-axis) consisting of a patterned nanomembrane with a practically semi-infinite dimension along $y'$ (> 1 cm), which can be regarded as a hypothetical nanowire with an infinite width ($d = \infty$). Equation (6) indicates that there are two unknown parameters, $\varepsilon_{x'x'}(x', d)$ and $\alpha(x', d)$, while the experiment only provides the values of $\Delta\omega_z$ (figure 2(b)). Beyond the $\alpha(x', d) = 0$ assumption, $\varepsilon_{x'x'}(x', d)$ can be determined using the reference structure ($d = \infty$). Here, the nanowire strain profile along the $x'$-axis, $\varepsilon_{x'x'}(x', d)$, is comparable with the profile along the same direction in the reference structure, $\varepsilon_{x'x'}(x', d = \infty) = \varepsilon_{x'x'}(x', d)$, because at an identical length (1 µm) the effect of the newly formed free surfaces should be similar in both systems. Along the $y'$-axis, the reference sample is infinite and thus the initial strain is preserved (i.e. no relaxation takes place along this axis), meaning that $\varepsilon_{y'y'}(x', d = \infty)$ is equivalent to the initial strain in the unpatterned nanomembrane, $\varepsilon_0$.
Analogously to the relaxation factor $\alpha$, we introduce an additional anisotropic stress relaxation factor, $\beta(x', d = \infty) = \sigma_{x'x'}(x', d = \infty)/\sigma_0$, which represents the relative stress relaxation along the $x'$-axis in the reference structure. The equation for the Raman shift as a function of $\beta(x', d = \infty)$ can then be written as

$$\Delta\omega_z = \frac{p\,\varepsilon_{z'z'} + q\left[\varepsilon_{x'x'}(x', d = \infty) + \varepsilon_{y'y'}(x', d = \infty)\right]}{2\omega_0}, \quad (9)$$

where $\varepsilon_{y'y'}(x', d = \infty) = \varepsilon_0$ is obtained from the Raman signal of the unpatterned nanomembrane, whereas $\Delta\omega_z$ is obtained from Raman scans along the $x'$-axis from the edge. By using the obtained values of $\varepsilon_{y'y'}(x', d = \infty)$ and $\Delta\omega_z$, we can determine $\beta$. Thus, the strain along the $x'$-axis of the edge sample can be extracted similarly to equation (4) using $\beta$:

$$\varepsilon_{x'x'}(x', d = \infty) = \frac{s\,\beta(x', d = \infty) - (C'_{12} - C'^2_{13}/C'_{33})}{C'_{11} - C'^2_{13}/C'_{33}}\,\varepsilon_0, \quad (10)$$

with

$$\varepsilon_{z'z'}(x', d = \infty) = -\frac{C'_{13}}{C'_{33}}\left[\varepsilon_{x'x'}(x', d = \infty) + \varepsilon_0\right]. \quad (11)$$

This strain is only a function of $x'$ and varies from $x' = 0$ (edge) to $x' = 0.5$ µm around the nanowire center. The strain component $\varepsilon_{y'y'}(x', d = \infty)$ (parallel to the edge) remains unchanged. This approach is analogous to the semi-infinitely wide line structures used in the theoretical treatment developed earlier by Jain et al [20]. The measured Raman shifts from the edge structures were attributed mostly to the strain along $x'$, $\varepsilon_{x'x'}(x', d = \infty)$, and the constant strain component in the $y'$-direction, $\varepsilon_0$, which is the same as the strain obtained from the nanomembrane. In the following, we test the validity of these two approaches.
Results and discussion
Figure 2(a) shows the Raman spectra obtained with the 442 nm excitation at the center of the nanowires using the z(x′x′)z configuration. We recently reported that TO phonons can be effectively observed using high-NA illumination [10]. The observed TO-active Raman shifts are upshifted relative to LO phonons, as predicted by comparing equations (1) and (2) [10]. This upward shift arising from the TO phonon can easily be mixed with the LO phonon signal and consequently lead to an overestimate of the level of stress relaxation, because the spectral split between the LO and TO phonons is too small to be resolved by peak-function fits. Proper selection of the sample azimuth, incident polarization and analyzer orientation is therefore necessary to accurately uncover the stress behavior in strained Si nanowires. We observed that the Raman shift in the z(x′x′)z configuration is consistently lower than in the z(y′y′)z configuration for all regions in the nanowires. Under z(x′x′)z, that is, with the incident polarization parallel to the nanowire axis, the observed Raman shift is predominantly due to the LO phonon. In the case of z(y′y′)z, where the incident polarization is perpendicular to the nanowire axis, the observed Raman shift consists of both LO and TO phonons [18]. This is because in the z(y′y′)z configuration the transverse (E_y′) electric field is efficiently converted to the E_z field when introduced to the (110) and (−110) sidewalls, which results in the effective excitation of TO phonons [10]. The TO-scattered Raman signal is efficiently collected in the backscattering geometry [18]. From our previous work [10], we experimentally verified that TO phonons in nanowires have a higher Raman peak shift relative to LO phonons. This explains why the Raman peak shift for z(y′y′)z is higher and broader (data not shown) than for z(x′x′)z. Hence, in order to obtain an accurate in-plane tensile stress from LO phonons, it is important to choose the incident polarization parallel to the nanowire.
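The polarization selection rules invoked here can be checked with the standard Raman tensors of the Si optical phonons. The sketch below evaluates I ∝ |e_s · R · e_i|² for the two parallel-polarized configurations and shows that, for ideal (001) backscattering with in-plane polarizations, only the LO (z) mode is allowed, so any TO contribution must indeed originate from field conversion at the sidewalls.

```python
# Backscattering Raman selection rules for the Si optical phonons.
import numpy as np

d = 1.0  # arbitrary Raman tensor element
R_TO_x = np.array([[0, 0, 0], [0, 0, d], [0, d, 0]], float)
R_TO_y = np.array([[0, 0, d], [0, 0, 0], [d, 0, 0]], float)
R_LO_z = np.array([[0, d, 0], [d, 0, 0], [0, 0, 0]], float)

e_par = np.array([1, 1, 0]) / np.sqrt(2)    # x': along the <110> wire axis
e_perp = np.array([-1, 1, 0]) / np.sqrt(2)  # y': across the wire

def intensity(e_s, R, e_i):
    """I ~ |e_s . R . e_i|^2 for given scattered/incident polarizations."""
    return abs(e_s @ R @ e_i) ** 2

for name, R in [("TO_x", R_TO_x), ("TO_y", R_TO_y), ("LO_z", R_LO_z)]:
    print(name, intensity(e_par, R, e_par), intensity(e_perp, R, e_perp))
# -> TO modes vanish for both z(x'x')z and z(y'y')z; the LO mode is allowed in both.
```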
Here, the Raman signal is collected from the whole thickness of the nanowire (i.e. 15 nm). The measured spectra are "background-free" and contain only the intrinsic Si-Si Raman peak of the probed nanowire, owing to the introduction of the Ge interlayer described above. It should be noted that the dominant contribution to the measured Raman signal originates from the LO phonon mode from both the top (001) surface and the long-axis sidewalls, {110}, based on the Raman tensor calculation under x′-polarized light. For the sake of comparison, the Raman spectrum of the initial strained nanomembrane (solid black line) as well as the Si phonon peak position in bulk Si (broken vertical lines) is shown. Table 1 summarizes the Raman peak position and spectral width for all the investigated structures. As the width decreases, the Raman peaks shift upward with respect to the position of the Si phonon peak of the initial strained Si layer. No significant broadening of the nanowire Si-Si Raman mode is, however, observed. It is noticeable that the Si-Si peak position of the investigated nanowires is upshifted with respect to its position in the initial (ω = 515.42 cm⁻¹) strained nanomembrane. This shift is indicative of a partial relaxation of stress during the process of nanowire patterning using reactive ion etching.
To obtain more insight into the complex redistribution of stress as the nanowire dimension shrinks, the profiles of the Raman shift for nanowires of varying width are plotted in figure 2(b). The nanowires were scanned along the x′-axis with a step size of 25 nm using the z(x′x′)z configuration. The Raman shift for d = 30 nm using 355 nm (UV) excitation is also displayed in figure 2(b). At this excitation, only the top 10 nm of the nanowire thickness is probed. For comparison, the Raman shifts measured from the original membrane are indicated with solid (442 nm) and dashed (355 nm) lines. Figure 3 shows the stress profiles using (a) the first approach, α(x′, d) = 0, and (b) the second approach, ε_x′x′(x′, d) = ε_x′x′(x′, d = ∞). Regardless of the approach used, the profiles measured for all the nanowires display qualitatively similar behavior, characterized by a more pronounced relaxation of stress near the nanowire edge. Moreover, as a general trend, the residual stress shows plateau-like profiles in the region around the center. The breadth of this region is sensitive to the nanowire width and varies from ∼400 nm at d = 50 or 80 nm to ∼700 nm at d = 30 nm. The enhanced relaxation near the edges results from the additional free surface as compared to the rest of the nanowire (four versus three facets). As described below, this constant stress value away from the edges can be attributed to the nearly constant and highly retained stress at the buried strained-Si/SiO₂ interface.
The initial stress obtained from the nanomembrane using equation (2) is plotted (solid black) to evaluate the validity of the first approach, based on the assumption of a full relaxation along the nanowire width (α(x′, d) = 0). Strikingly, the stress profile along the long axis obtained using this approach is higher than the original stress in the nanomembrane. This discrepancy indicates that the assumption of a full relaxation along the width is inaccurate and that the stress is not uniaxial. In the following, we therefore focus our analysis on the second approach. We first calculated the strain from equations (10) and (11) using the measured Raman shift scanned from x′ = 0 (edge) to x′ = 0.5 µm (towards the center) for the semi-infinite width. The strain values are then used in equations (8) and (9) to calculate the stress at each position. Figure 3(b) shows the obtained stress profiles σ_x′x′(x′, d) and σ_y′y′(x′, d). The calculated stress values along the x′-direction are below the original stress of the membrane. It is also interesting to note that the stress profile along the x′-direction of the nanowires is similar, with stress values very close to the stress of the edge sample, suggesting that the stress is preserved at the center. Moreover, the stress along the x′-direction at the edges (x′ = 0 and 1.0 µm) is the same (σ_x′x′(x′, d) ≈ 0.85 GPa) for all nanowires, regardless of the excitation wavelength. This is not the case when a full relaxation is assumed along the width, which indicates that this assumption is fraught with large uncertainties. On the other hand, we can clearly see from the profile along the width, σ_y′y′(x′, d), that the nanowire is still under a biaxial stress.
The stress profile along the width is relatively constant through the whole length of the nanowire, unlike along the long axis where pronounced relaxation is observed at the edges. Furthermore, a full relaxation of stress is only observed at the edge of the nanowire with d = 30 nm. For this set of nanowires, the stress at the center reaches a value of ∼200 MPa. This behavior is observed for both excitation wavelengths. At d = 80 and 50 nm, the stress is well above the fully relaxed region (gray dashed zero line). This result indicates that the overall average stress in the nanowire is biaxial but not bi-isotropic. Note that at an excitation wavelength of 442 nm the buried strained-Si/SiO2 interface is also probed. Here, the stress in the region 200 nm ≤ x ≤ 800 nm is comparable with the initial stress in the nanomembranes, whereas at 355 nm the initial stress is only preserved in the region 450 nm ≤ x ≤ 550 nm. Figure 3(c) exhibits the relaxation ratio at the center of the nanowire as a function of nanowire width. It is noticeable that the extent of this relaxation along the width exhibits linear behavior as a function of the nanowire width: σ_yy/σ_0 = 1.01 − 5.77 × 10⁻³ × d. The relaxation along the long axis is small and remains unchanged with the nanowire width.
Interestingly, the combination of two excitation wavelengths provides new insight into the complex behavior of stress in nanowires. Figure 4 shows the depth dependence of the stress profiles along the two in-plane axes for a nanowire with a width of d = 30 nm under the z(xx)z condition using different excitation wavelengths (penetration depths), namely 442 nm (∼168 nm) and 355 nm (∼10 nm). It is noteworthy that Raman shifts measured using the 355 nm laser are centered around higher wavenumbers as compared with those obtained using the 442 nm excitation, as shown in figure 2(b). Figure 4(a) also displays the original stress (lines) in the unpatterned nanomembrane obtained using the two excitation wavelengths. The small difference in the measured stress between the two excitations suggests that the region near the surface of the nanomembranes is slightly relaxed.
After patterning of the nanowires, the stresses are observed to be more relaxed for both the 442 and 355 nm excitations. However, the interesting observation is that the stress relaxation as obtained by the 355 nm excitation decreases monotonically toward the center of the nanowire, which remains highly strained. At the 442 nm excitation, the stress relaxation plateaus in the region 200 nm ≤ x ≤ 850 nm. This dissimilarity in stress behavior between the two excitation wavelengths is indicative of the non-uniform distribution of the in-plane stress along the nanowire thickness. More precisely, the fact that the stress measured using the laser with the shallower penetration depth is systematically smaller than the value averaged over the whole thickness provides direct evidence that the region near the nanowire-oxide interface maintains a high level of stress. By combining the stress profiles measured at the 442 nm (σ_xx^15nm) and 355 nm (σ_xx^10nm) excitations, we extract the stress profile in the bottom 5 nm of the nanowire along the x-direction, σ_xx^5nm (figure 4(a)). Similarly, the stress profile along the y-direction is depicted in figure 4(b). For the two in-plane axes, the obtained stress profiles are qualitatively identical but remarkably different from the profiles measured for the top 10 nm or over the whole nanowire thickness. In particular, the part of the nanowire near the interface is under a higher stress, especially in the region about 100-200 nm away from the edge, in qualitative agreement with finite element simulations (not shown). In this region, close stress values are recorded for the two in-plane directions, indicating that the interface with the oxide preserves almost the initial isotropy (i.e. a biaxial strain). Interestingly, the stress in the x-direction reaches values that are higher than the initial stress, indicating that the contraction of the lattice near the newly formed free surfaces is accompanied by a strong distortion close to the interface with the oxide. This means that when the lattice near the edges moves inwards, it drags the underlying substrate, leading to a complex redistribution of stress. Beyond ∼200 nm away from the edge, the stress decreases slightly along the nanowire length to stabilize around 1.2 GPa. The decrease is more significant along the width. For this direction, the in-plane stress remains the same independently of the depth in the nanowire. Similar behavior is also observed for the edge sample, as shown in figure 4(c). The only noticeable difference in the bottom 5 nm stress values along the long axis between the d = 30 nm nanowire and the reference sample is located at the edges. The bottom 5 nm stress at the edge of the reference sample is ∼1.06 GPa, which is just above the stress measured with 355 nm excitation, whereas for the d = 30 nm nanowire the bottom 5 nm stress is 0.83 GPa. Toward the center, the bottom 5 nm stress values remain unchanged.
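The depth decomposition used here reduces to simple weighted-average arithmetic. The sketch below illustrates it under the assumption, implicit in the text, that each excitation returns a stress value averaged over its probed thickness (15 nm for 442 nm, 10 nm for 355 nm); the function name and the sample values are illustrative, not the measured data.

```python
import numpy as np

def bottom_layer_stress(sigma_full, sigma_top, t_full=15.0, t_top=10.0):
    """Recover the average stress in the bottom (t_full - t_top) slab.

    Assumes each measurement is a thickness-weighted average:
        t_full * sigma_full = t_top * sigma_top
                              + (t_full - t_top) * sigma_bottom
    """
    return (t_full * sigma_full - t_top * sigma_top) / (t_full - t_top)

# Illustrative stress profile values (GPa) along x; not the measured data.
sigma_442 = np.array([0.85, 1.10, 1.25, 1.20])   # whole 15 nm thickness
sigma_355 = np.array([0.60, 0.95, 1.05, 1.00])   # top 10 nm only
print(bottom_layer_stress(sigma_442, sigma_355))  # bottom 5 nm profile
```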
Conclusion
In summary, we presented a precise method to map the stress in individual strained Si nanowires directly on oxide by using background-free, high-resolution polarized Raman spectroscopy with two excitation wavelengths. We also derived a theoretical framework to extract the stress from Raman shifts, taking into account the stress anisotropy imposed by the nanowire quasi-one-dimensional morphology. We found that adjusting the incident polarization parallel to the long axis of the nanowire is necessary for an accurate Raman probe. Our analysis demonstrates that the assumption of a full relaxation along the shortest dimension is not a valid approximation even for a dimension as small as 30 nm. The obtained stress profiles unraveled the complexity of free surface-induced stress redistribution in patterned nanowires. Besides the anticipated increase in the importance of relaxation for narrower nanowires, the use of two excitation wavelengths provided unprecedented details on the behavior of stress. The obtained stress profiles at different depths clearly demonstrate that the final stress distribution is rather complex, varying from nearly bi-isotropic to uniaxial within the same nanowire. This transition from an initially uniform, isotropic stress to a heterogeneous distribution is the result of a simultaneous contraction of the lattice near the newly formed free surfaces and a tension near the interface with the oxide. The fabrication and design of strained Si nanowire-based devices should take into account this non-uniform distribution of stress. Moreover, these observations provide the basis for more accurate calculations and modeling of stress behavior in nanowires and the associated changes in the physical properties. | 7,833 | 2013-05-28T00:00:00.000 | [
"Physics",
"Materials Science",
"Engineering"
] |
Second order accurate distributed eigenvector computation for extremely large matrices
We propose a second-order accurate method to estimate the eigenvectors of extremely large matrices thereby addressing a problem of relevance to statisticians working in the analysis of very large datasets. More specifically, we show that averaging eigenvectors of randomly subsampled matrices efficiently approximates the true eigenvectors of the original matrix under certain conditions on the incoherence of the spectral decomposition. This incoherence assumption is typically milder than those made in matrix completion and allows eigenvectors to be sparse. We discuss applications to spectral methods in dimensionality reduction and information retrieval.
Introduction
Spectral methods have a long list of applications in statistics and machine learning. Beyond dimensionality reduction techniques such as PCA or CCA [And03, MKB79], they have been used in clustering [NJW02], ranking & information retrieval [PBMW98, HTF+01, LM05] or classification for example. Computationally, one of the most attractive features of these methods is their low numerical cost, in particular on problems where the data matrix is sparse (e.g. graph clustering or information retrieval). Computing a few leading eigenvalues and eigenvectors of a matrix, using the power or Lanczos methods for example, requires performing a sequence of matrix-vector products and can be processed very efficiently. This means that when the matrix is dense and has dimension n, the cost of each iteration is O(n²) in both storage and flops.
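To make the cost structure concrete, here is a minimal power-iteration sketch in Python; the only interaction with the matrix is through matrix-vector products, which is what makes sparse or implicit matrices cheap to handle (this is a generic textbook implementation, not code from the paper).

```python
import numpy as np

def power_iteration(matvec, n, iters=200, seed=0):
    """Leading eigenvector via repeated matrix-vector products.

    `matvec` is any function x -> Mx, so dense, sparse or implicit
    representations of M are all handled through the same interface.
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = matvec(v)              # O(n^2) dense, O(Card(M)) sparse
        v = w / np.linalg.norm(w)
    return v, v @ matvec(v)        # eigenvector and Rayleigh quotient

M = np.random.default_rng(1).standard_normal((500, 500))
M = (M + M.T) / 2                  # symmetrize for a well-posed example
v, lam = power_iteration(lambda x: M @ x, 500)
```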
However, for extremely large scale problems arising in statistics or information retrieval for example, this cost quickly becomes prohibitively high and makes spectral methods impractical. In this paper, we propose a randomized, distributed algorithm to estimate eigenvectors (and eigenvalues) which makes spectral methods tractable on very large scale matrices. We show that our method is second order accurate and illustrate its performance on a few realistic datasets.
Going back to the numerical cost of spectral methods, we see that decomposing each matrix-vector product into many smaller block operations partially alleviates the complexity problem, but makes the overall process very bandwidth intensive. Decomposition techniques thus improve the granularity of iterative eigenvalue methods (i.e. require many cheaper operations instead of a single very expensive one), but at the expense of significantly higher bandwidth requirements. Here, we focus on methods that improve the granularity of large-scale eigenvalue computations while having very low bandwidth requirements, meaning that they can be fully distributed over many loosely connected machines.
The idea of using subsampling to lower the complexity of spectral methods can be traced back at least to [GMKG91, PRTV00], who described algorithms based on subsampling and random projections respectively. Explicit error estimates followed in [FKV04, DKM06, AM07], which bounded the approximation error of either elementwise or columnwise matrix subsampling procedures. On the application side, a lot of work has been focused on the Pagerank vector, and [NZJ01] in particular study its stability under perturbations of the network matrix. Similar techniques are applied to spectral clustering in [HYJT08] and both works have close connections to ours. Following the Netflix competition on collaborative filtering, a more recent stream of works [RFP07, CR08, CT09, KMO09] has also been focused on exactly reconstructing a low rank matrix from a small, single incoherent set of observations. Finally, more recent "volume sampling" results provide relative error bounds [KV09], but so far, the sampling probabilities required to obtain these improved error bounds remain combinatorially hard to compute.
Our work here is focused on the impact of subsampling on eigenvector approximations. First, we seek to understand how far we can reduce the granularity of eigenvalue methods using subsampling before reconstructing eigenvectors becomes impossible. This question was partially answered in [CT09, KMO09] for matrices with low rank, incoherent spectrum, using a single subset of matrix coefficients, after solving a convex program with high complexity. Here we make much milder assumptions on matrix incoherence. In particular, we allow some eigenvectors to be sparse (while remaining incoherent on their support) and we approximate eigenvectors using many simple operations on subsampled matrices. Under certain conditions on the sampling rate which guarantee that we remain in a perturbative setting, we show that simply averaging many approximate eigenvectors obtained by subsampling reduces the approximation error by an order of magnitude.
Notation.
In what follows, we write S_n the set of symmetric matrices of dimension n. For a matrix X ∈ R^{m×n}, we write ‖X‖_F its Frobenius norm, ‖X‖_2 its spectral norm, σ_i(X) its i-th largest singular value and let ‖X‖_∞ = max_{ij} |X_ij|, while Card(X) is the number of nonzero coefficients in X. We denote by X(i, j) or X_ij its (i, j)-th element and by M_i the i-th column of M. Here, • denotes the Hadamard (i.e. entrywise) product of matrices. When x ∈ R^n is a vector, we write ‖x‖_2 its Euclidean norm and ‖x‖_∞ its ℓ_∞ norm. We write 1 ∈ R^n the vector having all entries equal to 1. Finally, κ denotes a generic constant, whose value may change from display to display.
Subsampling
We first recall the subsampling procedure in [AM07], which approximates a symmetric matrix M ∈ S_n using a subset of its coefficients. The entries of M are independently sampled as S_ij = M_ij/p with probability p, and S_ij = 0 otherwise, where p ∈ [0, 1] is the sampling probability, so that E[S] = M. Theorem 1.4 in [AM07] shows that when n is large enough, a spectral-norm bound of order ‖M‖_∞(n/p)^{1/2} on ‖M − S‖_2 holds with high probability. In what follows, we will prove a similar bound on ‖M − S‖_2 using incoherence conditions on the spectral decomposition of M.
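A minimal sketch of this subsampling step, with the sampling rule as reconstructed above (keep each entry with probability p and rescale by 1/p, so that E[S] = M); masking the upper triangle and mirroring it is one way to keep S symmetric:

```python
import numpy as np
import scipy.sparse as sp

def subsample(M, p, rng):
    """Keep each (upper-triangular) entry w.p. p, rescale by 1/p.

    Returns a sparse symmetric matrix S with E[S] = M and roughly
    p * n^2 nonzero entries.
    """
    n = M.shape[0]
    mask = np.triu(rng.random((n, n)) < p)   # sample entries with i <= j
    mask = mask | mask.T                     # symmetric sampling pattern
    return sp.csr_matrix(np.where(mask, M / p, 0.0))

rng = np.random.default_rng(0)
M = rng.standard_normal((300, 300)); M = (M + M.T) / 2
S = subsample(M, p=0.05, rng=rng)
print(S.nnz / M.size)   # fraction of retained entries, close to 0.05
```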
Computational benefits
Computing k leading eigenvectors and eigenvalues of a symmetric matrix of dimension n using iterative algorithms such as the power or Lanczos methods (see [GVL90, Chap. 8-9] for example) only requires matrix-vector products, hence can be performed in O(kn²) flops when the matrix is dense. However, this cost is reduced to O(k Card(M)) flops for sparse matrices M. Because the matrix S defined in (1) has only pn² nonzero coefficients on average, the cost of computing k leading eigenvalues/eigenvectors of S will typically be 1/p times smaller than that of performing the same task on the full matrix M. Of course, sampling the matrix S still requires O(n²) flops, but can be done in a single pass over the data and be fully distributed. In what follows, we will show that, under incoherence conditions, averaging the eigenvectors of many independently subsampled matrices produces second order accurate approximations of the original spectral decomposition. While the global computational cost of this averaging procedure may not be globally lower, it is decomposed into many much smaller computations, and is thus particularly well adapted to large clusters of simple, loosely connected machines (Amazon EC2, Hadoop, etc.).
Figure 1: Our objective here is to approximate the spectral decomposition problem of size O(n²) by solving many independent problems of much smaller size.
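The 1/p cost factor can be observed directly with a sparse iterative eigensolver; the sketch below uses SciPy's Lanczos-based `eigsh` as one possible implementation and compares the leading eigenvector of M with that of a subsampled S (all parameter values are illustrative):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n, p = 2000, 0.02
M = rng.standard_normal((n, n)); M = (M + M.T) / 2

mask = np.triu(rng.random((n, n)) < p); mask = mask | mask.T
S = sp.csr_matrix(np.where(mask, M / p, 0.0))   # ~p * n^2 nonzeros

lam_M, u_M = spla.eigsh(M, k=1, which="LA")     # dense matvecs: O(n^2)
lam_S, u_S = spla.eigsh(S, k=1, which="LA")     # sparse matvecs: ~1/p cheaper
print(abs(float(u_M[:, 0] @ u_S[:, 0])))        # eigenvector alignment
```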
Sparse matrix approximations
Let us write the spectral decomposition of M ∈ S_n as M = Σ_{i=1}^n λ_i u_i u_iᵀ, where u_i ∈ R^n for i = 1, ..., n and λ ∈ R^n collects the eigenvalues of M with λ_1 > ... > λ_n (we assume they are all distinct). Given α ∈ [0, 1]^n, we measure the incoherence of the matrix M by a coefficient µ(M, α) defined over eigenvector supports of size n^{α_i}. Note that this definition is slightly different from that used in [CT09] because we do not seek to reconstruct the matrix M exactly, so the tail of the spectrum can be partially neglected in our case. As we will see below, the fact that we only seek an approximation also allows us to handle sparse eigenvectors.
Let us define a matrix Q ∈ S_n with i.i.d. Bernoulli coefficients, Q_ij ∼ B(p). We can write M • Q/p = M + r_p (M • C), where r_p = ((1 − p)/p)^{1/2} and C has i.i.d. entries with mean zero and variance one, defined as C_ij = (Q_ij − p)/(p(1 − p))^{1/2}. We can now write the sampled matrix S in (1) as S = M + E, with E = r_p (M • C), and we now seek to bound the spectral norm of the residual matrix E as n goes to infinity. Naturally, if ‖E‖_2 is small, S is a good approximation of M in spectral terms, because of Weyl's inequality and the Davis-Kahan sin(θ) theorem (see [Bha97]). So our aim now is to control ‖E‖_2 so we can guarantee the quality of spectral approximations of M made using the sparse matrix S, which is computationally easier to work with than the dense matrix M. We now make the following key assumption on the incoherence of the matrix M. Assumption 1. There is a sequence of vectors α^(n) ∈ [0, 1]^n for which µ(M, α^(n)) ≤ µ as n goes to infinity, where µ is an absolute constant.
In what follows, we will drop the dependence of α on n to make the notation less cumbersome, so instead of writing α^(n) we will just write α. We have the following theorem.
Theorem 1. Suppose that Assumption 1 holds. Let us call α_min = min_{1≤i≤n} α_i. Assume that p and n are such that p < 1/2 and, for a given δ > 0, α_min > (log n)^{(δ−3)/4} and (α_min log n)^4/(p n^{α_min}) → 0. Proof. Using [HJ91, Th. 5.5.19] or the fact that uuᵀ • C = D_u C D_u, where D_u is a diagonal matrix with the vector u on the diagonal (remember that ‖·‖_2 is a matrix norm and hence sub-multiplicative), we can bound ‖E‖_2 by the norms of submatrices of C. Since we assume that the vector u_i is sparse with Card(u_i) ≤ n^{α_i}, C^{α_i} is a principal submatrix of C with dimension n^{α_i}. Now, we show in Theorem A-1 (this is the key element of the proof; see p. 17) that max_i ‖C^{α_i}‖_2/n^{α_i/2} remains asymptotically bounded almost surely when (α_min log n)^4/(p n^{α_min}) → 0 and α_min > (log n)^{(δ−3)/4} for some δ > 0. (Our proof of Theorem A-1 relies on a result of Vu [Vu07] and Talagrand's inequality.) This yields Equation (5) and concludes the proof.
The proof of the theorem makes clear that the error term coming from the sparsest eigenvector will usually dominate all the others in the residual matrix E.
In these approximation methods, we naturally want to use a small p, so that S is very sparse and the computation of its spectral decomposition is numerically cheap. The result of Theorem A-2 guarantees that the subsampling approximation works whenever p ≫ (α_min log n)^4/n^{α_min} (asymptotically, but we have in mind a very high-dimensional setting, so n will be large in practice).
A natural question is therefore whether we could use p much smaller than this. Separate computations (see Subsection A-3) indicate that ‖C‖_2/n^{1/2} goes to infinity if p ≤ (log n)^{1−δ}/n, which suggests that this subsampling approach to approximating eigenproperties of M might run into trouble if the sampling rate p gets smaller than log n/n. As a matter of fact, we could not control the quantities ‖C^{α_i}‖_2/n^{α_i/2} at this sampling rate, which is naturally problematic given the way we established the bound on ‖E‖_2. Furthermore, if the sparsest eigenvector had support disjoint from the supports of all other eigenvectors, E would be the sum of two block diagonal matrices. Hence, its operator norm would be the maximum of the operator norms of the two blocks, at least one of which potentially having a very large operator norm.
Tightness
Note that, in the limit case α = 1 where the eigenvectors are fully dense and incoherent, our bound is similar to the original bound in [AM07, Theorem 1.4] or that of [KMO09, Th 1.1] (our model for M is completely different however). In fact, the bounds in (2) and (5) can be directly compared. In the fully dense case where α = 1, the original bound in (2) is always tighter than our bound in (5). However, in the sparse incoherent case where α ≠ 1, the ratio of the bound (2) in [AM07] over our bound (5) can be large when α_min < 1. The results in [KMO09], which are focused on exact recovery of low rank incoherent matrices, do not apply when the eigenvectors are sparse (i.e. α ≠ 1).
Approximating eigenvectors
We now study the impact of subsampling on the eigenvectors, and in particular on the one associated with the largest eigenvalue. We have the following theorem.
Theorem 2. Assume that the eigenvalues of M are simple. Let us call v_k ∈ R^n and λ_k(S) the k-th eigenpair of S, and u_k ∈ R^n, λ_k the k-th eigenpair of M. We write R_k the reduced resolvent of M associated with u_k. We also call d_k the separation distance of λ_k, i.e. the distance from λ_k to the nearest eigenvalue of M. Proof. From now on we focus on u_k and drop the dependence on k in u_k, v_k, R_k, ∆_k, etc., when this does not create confusion. We also use the notation λ_S and λ instead of λ_k(S) and λ_k, and write γ = λ_S − λ. The expansion formula is valid as soon as ∆ has no eigenvalues equal to −1, i.e. Id + ∆ is invertible. We also have by construction Ru = 0, so REu = ∆u. Hence, putting all the elements together and recalling that ‖u‖_2 = 1, we get (7) from Equation (8).
Spectral methods tend to focus on eigenvectors associated with extremal eigenvalues, so let us elaborate on the meaning of Theorem 2 for the eigenvector associated with the largest eigenvalue. If we suppose that the spectral norm of the residual matrix E is smaller than half the separation distance of the largest eigenvalue, i.e. that condition (9) holds, the previous result (and results such as [Kat95, Theorem II.3.9]) shows that we can use perturbation expansions to approximate the leading eigenvector of the subsampled matrix. Based on the bound in Equation (5), the condition stated in Equation (9) will be satisfied (asymptotically with high probability) if the separation distance is large enough relative to ξ. We note that assumption (9) is likely reasonable if one eigenvalue is very large compared to the others, which is a natural setting for methods such as PCA. (Note however that our result is not limited to the largest eigenvalue but actually applies to any eigenvalue λ of the original matrix M for which ‖E‖_2 is smaller than half the distance from λ to any other eigenvalue of M. In particular, the result would apply to several separated eigenvalues.) We also note that the approximation is accurate to order j + 2.
Let us now try to make our approximation slightly more explicit. If we write R the reduced resolvent of M (associated with u_1), and assume that λ_1 − λ_2 stays bounded away from 0, we obtain the expansion (10) in this setting, using Equation (7) with j = 1, after we account for the fact that uᵀEu is an order-‖E‖_2² accurate approximation of λ_1(S) − λ_1 [Kat95, Eq. 2.36 and 3.18]. This approximation makes clear that a key component in the accuracy of our approximations is the size of the vector Eu. For simplicity here, we have normalized v so that vᵀu = 1; a similar result holds if we set vᵀv = 1 instead, if for instance ‖E‖_2 → 0 asymptotically.
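The expansion is easy to check numerically. The sketch below builds the reduced resolvent of a small symmetric matrix explicitly, with the sign convention chosen so that v ≈ u + REu (the paper's convention may differ by a sign), and verifies that the first-order prediction leaves only a second-order residual:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)); M = (A + A.T) / 2
lam, U = np.linalg.eigh(M)
u, l1 = U[:, -1], lam[-1]                    # leading eigenpair of M

# Reduced resolvent at lambda_1, acting on the complement of span(u),
# with sign chosen so that v ~ u + R E u to first order.
R = sum(np.outer(U[:, i], U[:, i]) / (l1 - lam[i]) for i in range(n - 1))

B = rng.standard_normal((n, n))
E = 1e-3 * (B + B.T) / 2                     # small symmetric perturbation
v = np.linalg.eigh(M + E)[1][:, -1]
v = v / (v @ u)                              # normalize so v^T u = 1
print(np.linalg.norm(v - u))                 # O(||E||)
print(np.linalg.norm(v - (u + R @ E @ u)))   # O(||E||^2), much smaller
```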
Second order accuracy result for eigenvectors by averaging
In light of Equation (10), it is clear that v is a first order accurate approximation of u, because of the presence of the (first-order) term REu in the expansion. We now show that we can get a second order accurate approximation of the eigenvector u. Our results are based on an averaging procedure and hence are easy to implement in a distributed fashion. We have the following second-order accuracy result.
Theorem 3. Let us call u_1 the eigenvector associated with the largest eigenvalue of M, and ν_1 = v_1/‖v_1‖_2 the eigenvector associated with the largest eigenvalue of S, normalized so that ‖ν_1‖_2 = 1 and ν_1ᵀu_1 ≥ 0. Let us call ξ = µ/(p n^{α_min})^{1/2}. Suppose that the assumptions of Theorem 1 are satisfied (hence ξ → 0).
Suppose also that the strong separation condition (11) holds (in particular, d ≥ ξ ln(ξ⁻²)).
Then the expected eigenvector approximates u_1 up to a second-order error term. Practically, this means that if we average eigenvectors over many subsampled matrices (after removing indeterminacy by always making the first component positive), the residual error will be of second order in ξ. In other words, by averaging subsampled eigenvectors, we gain an order of accuracy (over the method that would just take one subsampled eigenvector) by canceling the effect of the first order residual term REu.
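A minimal sketch of this averaging procedure, assuming the subsampling scheme of the previous section; the sign indeterminacy is removed here via the largest-magnitude coordinate rather than the first component, a common robust variant of the rule quoted above:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def averaged_eigenvector(M, p, n_samples, seed=0):
    """Average leading eigenvectors of independently subsampled copies of M."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    acc = np.zeros(n)
    for _ in range(n_samples):
        mask = np.triu(rng.random((n, n)) < p)
        mask = mask | mask.T
        S = sp.csr_matrix(np.where(mask, M / p, 0.0))
        v = spla.eigsh(S, k=1, which="LA")[1][:, 0]
        v *= np.sign(v[np.argmax(np.abs(v))])   # fix the sign indeterminacy
        acc += v
    return acc / np.linalg.norm(acc)             # renormalize the average
```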
Proof. To keep notations simple, we drop the index 1 in ν and u in the proof (so ν_1 = ν and u_1 = u). In what follows, κ is a generic constant that may change from display to display. Before we start the proof per se, let us make a few remarks. First, there is a technical difficulty when trying to work directly with v, namely the fact that it appears difficult to control ‖(Id + ∆)⁻¹‖_2 and hence to get a bound on E[‖v − u‖] (with the normalization vᵀu = 1, ‖v‖ could be very large; our bounds show that this can happen with only low probability, but obviously E[‖v‖] could still be large). To go around this difficulty, we need two steps: first, we work with unit eigenvectors (so we go from v to ν), and second, we need a "regularization" step and will replace v by a vector ṽ_ε which is equal to v with high probability and for which we can control E[‖ṽ_ε − u‖]. More precisely, for ε > 0, we call ṽ_ε the vector whose properties are studied in Theorem A-3. We call it below the ε-regularized version of v.
We note that under the assumptions of the current theorem we have ξ/d → 0, so the results of Theorem A-3 apply. In particular, as shown in the proof of that theorem, we have ‖M‖_∞²/p² = o(ξ²). Also, Assumption 1 (which is made in Theorem 1) means µ is fixed, so ξ → 0 as p n^{α_min} → ∞.
If v is the eigenvector of S associated with its largest eigenvalue, we can use the fact that (v − u)ᵀu = 0 by construction. Turning our attention to ṽ_ε, we see that, since Ru = 0 by construction and R is symmetric, uᵀ∆ = 0, so (ṽ_ε − u)ᵀu = 0. Writing β for the normalized version of ṽ_ε, we see that β = ν as long as ‖(Id + ∆)⁻¹‖_2 ≤ 1/ε, since when this happens, v = ṽ_ε. We can now bound the approximation error through these two vectors (note the importance of the change of normalization here, as this bound would not hold with v instead of ν). Let us now work on controlling both these quantities. For reasons that will be clear later, we now take ε = 2ξ/d.
We show in Theorem A-3 that, for some κ > 0, the corresponding expectation is controlled asymptotically. Control of P(ν ≠ β). We have (essentially) seen in the proof of Theorem 2 above that if 2‖E‖_2/d < 1 − ε, then ‖(Id + ∆)⁻¹‖_2 ≤ 1/ε (see also the proof of Theorem A-3). Recall that we have now chosen ε = 2ξ/d.
Now we show the following deviation inequality in Theorem
Recall also that for n large enough, 0 ≤ m_E ≤ 3ξ when the conditions of Theorem 1 apply (see Theorem 1 or the arguments at the end of the proof of Theorem A-1). Suppose now that n is such that indeed m_E ≤ 3ξ.
Then, if d/2 − 4ξ > 0, the deviation bound applies asymptotically. Since we assumed that d ≥ ξ ln(ξ⁻²) and ξ → 0, we indeed have ξ/d → 0. Therefore, all we have to do now is to verify that, in the asymptotics we consider, the quantity on the right-hand side of the previous equation remains less than ξ²/d² asymptotically. Elementary algebra shows that this reduces to a condition whose right-hand side is controlled by ‖M‖_∞²/p². We have ‖M‖_∞²/p² = o(ξ²), so the right-hand side is going to zero. In particular, we see that when d ≥ ξ ln(ξ⁻²), as we assume, the inequality above is satisfied asymptotically. As a matter of fact, when d < exp(1), the result is obvious, as the right-hand side of Equation (12) goes to 0 asymptotically, while the left-hand side is asymptotically larger than exp(2)/2 for instance. So we have shown that under our assumptions, P(ν ≠ β) is suitably small, and we can finally conclude. This result applies to all eigenvectors corresponding to eigenvalues whose isolation distance (i.e. distance to the nearest eigenvalue) satisfies the separation condition (11), which is a strong version of the separation condition (9). We note that we need the strong separation condition (Equation (11)) to be able to take expectations rigorously.
Finally, we note that theoretical as well as practical considerations seem to indicate that condition (9) (and hence (11)) is quite conservative. On the theoretical side, we see with Equation (8) that what really matters for the quality of the approximation is the norm of the vector l_j, or its expectation. We used in our approximations the coarse bound ‖∆‖_2 ≤ 2‖R‖_2‖E‖_2, which is convenient because it does not require us to have information about the eigenvectors of ∆. However, we see that the norm of l_j could be small even when ‖R‖_2‖E‖_2 is not very small, for instance if u belonged to a subspace spanned by eigenvectors of ∆ associated with eigenvalues of this matrix that are small in absolute value. So it is quite possible that our method could work in a somewhat larger range of situations than the one for which we have theoretical guarantees. This is what our simulations below seem to indicate.
Variance
The expansion in Equation (10) also allows us to approximate the variance of the first-order residual REu after subsampling. This is useful in practice because it gives us an idea of how many independent computations we need to make to essentially void the effect of the first order term in the expansion of v. In terms of distributed computing, it therefore tells us how many machines we should involve in the computation. We have the following theorem.
Theorem 4. Let u_1 be the eigenvector associated with λ_1, the largest eigenvalue of M. Assuming w.l.o.g. that λ_1 = ‖M‖_2, the variance bound yields in particular a control in terms of NumRank(M) = ‖M‖_F²/‖M‖_2², the numerical rank of the matrix M, which is a stable relaxation of the rank satisfying 1 ≤ NumRank(M) ≤ Rank(M) ≤ n (see [RV07] for a discussion).
Proof. By construction, E[E] = 0, and because E is symmetric, the u_i's form an orthonormal basis and u_1ᵀEu_j is the j-th coefficient of Eu_1 in this basis, so the sum of the squared coefficients is the squared norm of the vector. The variance of u_1ᵀEu_1 is easy to compute if we rewrite this quantity as a sum of independent random variables. Also, separate computations (see Appendix, Subsection A-4) make the corresponding moments explicit. Assuming w.l.o.g. that λ_1 = ‖M‖_2, we get (13).
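In practice, one can also bypass the closed-form bound and pilot-estimate the size of the first-order term directly to decide how many machines to use; a rough sketch (which ignores the resolvent factor and only sizes the raw term Eu, so it should be read as an order-of-magnitude heuristic):

```python
import numpy as np

def estimate_num_samples(M, u, p, tol, n_pilot=50, seed=0):
    """Pilot-estimate E||Eu||^2 and size N so that the averaged
    first-order term has standard error below `tol`."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    r = np.sqrt((1 - p) / p)
    sq_norms = []
    for _ in range(n_pilot):
        q = (rng.random((n, n)) < p).astype(float)
        q = np.triu(q) + np.triu(q, 1).T          # symmetric Bernoulli mask
        C = (q - p) / np.sqrt(p * (1 - p))        # mean 0, variance 1
        E = r * M * C                             # E = r_p * (M o C)
        sq_norms.append(np.linalg.norm(E @ u) ** 2)
    var = float(np.mean(sq_norms))                # ~ E||Eu||^2
    return int(np.ceil(var / tol ** 2))           # averaging shrinks by 1/N
```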
Nonsymmetric matrices
The results described above are easily extended to nonsymmetric matrices. Here M ∈ R^{m×n}, with m ≥ n, and we write its spectral decomposition M = Σ_i σ_i u_i v_iᵀ. We can adapt the definition of incoherence accordingly and reformulate our main assumption on M as follows.
In this setting, using again [HJ91, Th. 5.5.19], we get a similar bound, where we have assumed that u_i, v_i are sparse and C^{α_i,β_i} is an n^{α_i} × m^{β_i} submatrix of C. As in (5), we can then bound the spectral norm of the residual almost surely. Perturbation results similar to (10) for left and right eigenvectors are detailed in [Ste98] for example.
Numerical experiments
In this section, we study the numerical performance of the subsampling/averaging results detailed above on both artificial and realistic data matrices.
Dense matrices: PCA, SVD, etc. We first illustrate our results by approximating the leading eigenvector of a matrix M as the average of leading eigenvectors of subsampled matrices, for various values of the sampling probability p. To start with a naturally structured dense matrix, we form M as the covariance matrix of the 500 most active genes in the colon cancer data set in [ABN+99]. We let p vary from 10⁻⁴ to 1 and for each p, we compute the leading eigenvector of 1000 subsampled matrices, average these vectors and normalize the result. We call u the true leading eigenvector of M and v the approximate one. We now normalize v so that ‖v‖_2 = 1 (which is standard, but different from the normalization we used in our theoretical investigations, where we had uᵀv = 1). In Figure 2, we plot uᵀv as a function of p together with the median of uᵀv sampled over all individual subsampled matrices, with dotted lines at plus and minus one standard deviation. We also record the proportion of samples where E satisfies the perturbation condition (9).
We repeat this experiment on a (nonsymmetric) term-document matrix formed using press release data from PRnewswire, to test the impact of subsampling on Latent Semantic Indexing results. Once again, we let p vary from 10⁻² to 1 and for each p, we compute the leading eigenvector of 1000 subsampled matrices, average these vectors and normalize the result. We call u the true leading eigenvector of M and v the approximate one. In Figure 3 on the left, we plot uᵀv as a function of p together with the median of uᵀv sampled over all individual subsampled matrices, with dotted lines at plus and minus one standard deviation. The matrix M is 6779 × 11171 with spectral gap σ_2/σ_1 = 0.66.
In Figure 3 on the right, we plot the ratio of CPU time for subsampling a gene expression matrix of dimension 2000 and computing the leading eigenvector of the subsampled matrix (on a single machine), over CPU time for computing the leading eigenvector of the original matrix. Two regimes appear: one where the eigenvalue computation dominates, with computation cost scaling with p, and another where the sampling cost dominates and the speedup is simply the ratio between sampling time and the CPU cost of a full eigenvector computation. Of course, the principal computational benefit of subsampling is the fact that memory usage is directly proportional to p.
A key difference between the experiments of Figure 2 and those of Figure 3 is that the leading eigenvector of the gene expression data set is much more incoherent than the leading left eigenvector of the term-document matrix, which explains part of the difference in performance. We compare both eigenvectors in Figure 4.
We then study the impact of the number of samples on precision. We use again the colon cancer data set in [ABN+99]. In Figure 5 on the left, we fix the sampling rate at p = 10⁻² and plot uᵀv as a function of the number of samples used in averaging. We also measure the impact of the eigenvalue gap λ_2/λ_1 on precision. We scale the spectrum of the gene expression covariance matrix so that its first eigenvalue is λ_1 = 1 and plot the alignment uᵀv between the true and the normalized average of 100 subsampled eigenvectors over subsampling probabilities p ∈ [10⁻², 1] for various values of the spectral gap λ_2/λ_1 ∈ {0.75, 0.95, 0.99}.
Graph matrices: ranking. Here, we test the performance of the methods described above on graph matrices used in ranking algorithms such as pagerank [PBMW98] (because of its susceptibility to manipulations however, this is only one of many features used by search engines). Suppose we are given the adjacency matrix A ∈ R^{n×n} of a web graph (one such matrix is displayed in Figure 6). Whenever a node has no out-links, we link it with every other node in the graph, so that B = A + δ1ᵀ/n, with δ_i = 1 if and only if deg_i = 0, where deg_i is the degree of node i. We then normalize B into a stochastic matrix, P^g_ij = B_ij/deg_i. The matrix P^g is the transition matrix of a Markov chain on the graph, modeling the behavior of a web surfer randomly clicking on links at every page. For most web graphs, this Markov chain is usually not irreducible, but if we set P to be a convex combination of P^g and the uniform transition matrix 11ᵀ/n, for some c ∈ (0, 1], the Markov chain with transition matrix P will be irreducible. An additional benefit of this modification is that the spectral gap of P is at least c [HK03]. The leading (Perron-Frobenius) eigenvector u of this matrix is called the Pagerank vector [PBMW98]; its coefficients u_i measure the stationary probability of page i being visited by a random surfer driven by the transition matrix P, hence reflect the importance of page i according to this model.
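This construction is straightforward to code; the sketch below uses the common convention P = c P^g + (1 − c) 11ᵀ/n (the paper's exact parameterization of c may differ) and dense arrays for clarity, whereas a real web graph would be handled sparsely:

```python
import numpy as np

def pagerank_matrix(A, c=0.85):
    """Build the damped transition matrix from an adjacency matrix A."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    B = A + np.outer(deg == 0, np.ones(n)) / n   # dangling nodes -> uniform row
    P_g = B / B.sum(axis=1, keepdims=True)       # row-stochastic
    return c * P_g + (1 - c) / n                 # add uniform teleportation

A = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)           # node 2 has no out-links
P = pagerank_matrix(A)
# The Pagerank vector is the leading left (Perron) eigenvector of P.
```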
The coefficients of pagerank vectors typically follow a power law for classic values of the damping factor [PRU06, BC06], which means that the bounds in Assumption 1 do not hold. Empirically however, while the distance between true and averaged eigenvectors quickly gets large, the ranking correlation (measured using Spearman's ρ [Mel07]) is surprisingly robust to subsampling.
We use two graphs from the Webgraph database [BV04]: wb-cs.stanford, which has 9914 nodes and 36854 edges, and cnr-2000, which has 325,557 nodes and 3,216,152 edges. For each graph, we form the transition matrix P as in [GZB04] with uniform teleportation probability and set the teleportation coefficient c = 0.85. In Figure 6 we plot the wb-cs.stanford graph and the Pagerank vector for cnr-2000 in loglog scale. In Figure 7 we plot the ranking correlation (Spearman's ρ) between the true and averaged Pagerank vectors (over 1000 samples), the median value of the correlation over all subsampled matrices and the proportion of samples satisfying the perturbation condition (9), for various values of the sampling probability p. We notice that averaging very significantly improves ranking correlation, far outside the perturbation regime.
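The evaluation step amounts to computing Spearman's ρ between the two score vectors; a short sketch using SciPy's implementation on synthetic power-law-like scores (illustrative data, not the Webgraph experiments):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
u_true = rng.pareto(1.5, size=10_000)                        # heavy-tailed scores
u_avg = u_true * np.exp(0.3 * rng.standard_normal(10_000))   # noisy estimate

rho, _ = spearmanr(u_true, u_avg)
print(f"Spearman rho = {rho:.3f}")   # rank agreement despite large L2 error
```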
Conclusion
We have proposed a method to compute the eigenvectors of very large matrices in a distributed fashion: 1. To each node in a computer cluster of size N, we send a subsampled version S_i of the matrix of interest, M.
2. Node i computes the relevant eigenvectors of S_i.
3. The N eigenvectors are averaged together and normalized to produce our final estimator.
The key to the algorithm is that Step 2 is numerically cheap (because S_i is very sparse), and hence can be executed fast even on small machines. Therefore a cluster or cloud of small machines could be used to approximate the eigenvectors of M, a difficult problem in general when M is extremely large.
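A compact sketch of steps 1-3, using a local process pool as a stand-in for the cluster (for simplicity the full matrix is shipped to each worker here, which a real deployment would avoid by sampling at the data source):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from concurrent.futures import ProcessPoolExecutor

def one_worker(args):
    """Steps 1-2: subsample M and return the sign-fixed leading eigenvector."""
    M, p, seed = args
    rng = np.random.default_rng(seed)
    mask = np.triu(rng.random(M.shape) < p); mask = mask | mask.T
    S = sp.csr_matrix(np.where(mask, M / p, 0.0))
    v = spla.eigsh(S, k=1, which="LA")[1][:, 0]
    return v * np.sign(v[np.argmax(np.abs(v))])

def distributed_eigenvector(M, p, n_workers=8):
    # On some platforms this must be called under `if __name__ == "__main__":`.
    with ProcessPoolExecutor() as pool:
        vs = list(pool.map(one_worker, [(M, p, s) for s in range(n_workers)]))
    v = np.mean(vs, axis=0)          # step 3: average and renormalize
    return v / np.linalg.norm(v)
```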
We have shown that under carefully stated conditions, the algorithm described above will yield a second-order accurate approximation of the eigenvectors of M. This gain in accuracy comes from the averaging step of our algorithm. We note that arguments similar to the ones we used in this paper could be made to compute second-order accurate approximations of the eigenvalues of M. (We restricted ourselves to eigenvectors here because in methods such as PCA, the eigenvectors are in some sense more important than the eigenvalues.) Our results depend on a measure of incoherence for M that we propose in this paper. They also show that subsampling will work if the sampling probability is small, but is likely to fail if that probability is too small. Finally, our simulations show that we gain significantly in accuracy by averaging subsampled eigenvectors (which suggests that our theoretical passage from first-order to second-order accuracy is also relevant in practice) and that the performance of our method seems to degrade for very incoherent matrices, a result that is also in line with our theoretical predictions.
A Appendix
A-1 On ‖C‖_2
Let us consider the symmetric random matrix C with entries distributed, for i ≥ j, as in Equation (A-1) (the centered, unit-variance Bernoulli variables used in the definition of C above). We assume that C is n × n. Our aim is to show that we can control ‖C‖_2, and in particular its deviation around its median. We do so by using Talagrand's inequality.
We have the following theorem.
Theorem A-1. Suppose that we observe n matrices C^{α_i}, for 1 ≤ i ≤ n, with entries distributed as those of the matrix C just described. Suppose these matrices are of size n^{α_i}, where the α_i are positive numbers. Call α_min = min_{1≤i≤n} α_i and assume that, for some fixed δ > 0, α_min > (log n)^{(δ−3)/4}. Suppose further that p is such that lim_{n→∞} (α_min log n)^4/(n^{α_min} p) = 0. Then max_{1≤i≤n} ‖C^{α_i}‖_2/n^{α_i/2} remains asymptotically bounded almost surely. Proof. We note that the application C → ‖C‖_2 is a convex, √2-Lipschitz (with respect to the Euclidean/Frobenius norm) function of the entries of C that are on or above the main diagonal. As a matter of fact, since ‖·‖ is a norm, it is convex. Furthermore, if A and B are two symmetric matrices, |‖A‖_2 − ‖B‖_2| ≤ ‖A − B‖_2 ≤ ‖A − B‖_F, and the Frobenius distance is at most √2 times the Euclidean distance between their above-diagonal entries. Now recall the consequence of Talagrand's inequality [Tal95] spelled out in [Led01], Corollary 4.10 and Equation (4.10): if F is a convex, 1-Lipschitz function (with respect to the Euclidean norm) on R^n of n independent random variables (X_1, ..., X_n) that take value in [u, v], and if m_F is a median of F(X_1, ..., X_n), then F concentrates around m_F at a Gaussian rate governed by (v − u). The random variables that are above the main diagonal of C are bounded, with values in a fixed interval determined by p. Therefore, calling m_n the median of n^{−1/2}‖C‖_2, we obtain a concentration bound in light of Equation (A-3). Suppose now that we have a collection C^{α_i} of matrices of size n^{α_i} with entries distributed as in Equation (A-1). (Note that the matrices could be dependent.) Let us call m_{n^{α_i}} the medians of ‖C^{α_i}‖_2/n^{α_i/2}. Then we have, by a simple union bound argument, for any k, a deviation bound governed by α_min = min_{1≤i≤k} α_i. Suppose now that k = n, p ≤ 1/2, pn^{α_min} > (log n)^{1+δ}, and t ≥ (log n)^{−δ/3} for some δ > 0. Then t²p(1 − p)n^{α_min} > (log n)^{1+δ/3}/2, which tends to ∞ as n → ∞. Because u_n = n exp(−(log n)^{1+δ/3}/16) is the general term of a converging series, we have, when p ≤ 1/2 and pn^{α_min} > (log n)^{1+δ} for some δ > 0, almost sure control of max_{1≤i≤n} |‖C^{α_i}‖_2/n^{α_i/2} − m_{n^{α_i}}| by a simple application of the Borel-Cantelli lemma. Hence, all we have to do now is control max_{1≤i≤n} m_{n^{α_i}}, which is the maximum of a deterministic sequence. Recall Vu's Theorem 1.4 in [Vu07], applied to our situation where we are dealing with bounded random variables with mean 0 and variance 1: if the matrix C has entries as above and is n × n, then almost surely n^{−1/2}‖C‖_2 is asymptotically bounded (by 2 + ε for any ε > 0, with a correction term involving a constant κ_0). So as soon as (log n)^4/(pn) remains bounded, so does m_n, the median of n^{−1/2}‖C‖_2. (Note that this is true because we are taking the maximum of elements of a fixed deterministic sequence that is asymptotically less than or equal to 2 + ε, for any ε, and the smallest argument is going to infinity. All the work using Talagrand's inequality was done to allow us to switch from having to control the maximum of a random sequence to that of a deterministic sequence.) Now, when (α_min log n)^4/(pn^{α_min}) → 0, we have a fortiori pn^{α_min} > (log n)^{1+δ} when α_min > (log n)^{(δ−3)/4}. So we conclude that the announced control holds when (α_min log n)^4/(pn^{α_min}) → 0 and α_min > (log n)^{(δ−3)/4}. Let us now consider the related issue of understanding the matrix E = r_p M • C, where r_p = ((1 − p)/p)^{1/2}, M is a deterministic matrix and C is a random matrix as above.
Theorem A-2. Suppose E = r_p M • C, where C is a symmetric random matrix distributed as above, M is a deterministic matrix and r_p = ((1 − p)/p)^{1/2}. Let us call m_E a median of ‖E‖_2. Then ‖E‖_2 concentrates around m_E at a rate governed by r_p‖M‖_∞. Proof. The crux of the proof is quite similar to that of Theorem A-1: we rely on Talagrand's concentration inequality for convex, 1-Lipschitz functions of bounded random variables. To do so, let us consider the map f : C → ‖M • C‖_2. This map f is convex as the composition of a norm with an affine mapping.
Let us now show that it is (√2‖M‖_∞)-Lipschitz with respect to the Euclidean norm: if we denote by c^(k)_{i,j} the (i, j)-th entry of the matrix C_k, we have |f(C_1) − f(C_2)| ≤ ‖M • (C_1 − C_2)‖_F, which is at most √2‖M‖_∞ times the Euclidean distance between the above-diagonal entries of C_1 and C_2. Hence, f is indeed a (√2‖M‖_∞)-Lipschitz function of the entries of C that are above or on the diagonal. Now the function of C we care about is g(·) = r_p f(·), which is convex and (√2‖M‖_∞ r_p)-Lipschitz. Given that the entries of C are bounded, we have, as in the proof of Theorem A-1, a Gaussian concentration inequality for ‖E‖_2 around m_E. Now, using the proof of Proposition 1.9 in [Led01] (see p. 12 of this book), we conclude that the corresponding moments are controlled, and, since a standard inequality applies for a and b positive, the announced bound follows. More generally, we see, using essentially Proposition 1.10 in [Led01] and elementary properties of the Gamma function, that if the random variable F is such that, for a deterministic number a_F, P(|F − a_F| > t) decays at a Gaussian rate, then its higher moments are controlled accordingly. Applying this result with k = 3, we get a third-moment bound. In our context, using the fact that, for positive a and b, (a + b)³ ≤ 4(a³ + b³) by convexity, we also have the stated control.
A-2 Regularized eigenvector considerations
We now have the following (regularized) second order accuracy result, which is a critical component of the proof of Theorem 3, one of the main results of the paper.
Theorem A-3. Suppose that the assumptions of Theorem 1 are satisfied. We consider the approximation of u, the eigenvector associated with the largest eigenvalue of M. Recall that v is the eigenvector corresponding to the leading eigenvalue of the subsampled matrix S. For ε > 0, we call ṽ_ε the ε-regularized version of v. Then, for any η > 0, we have an asymptotic bound on E[‖ṽ_ε − u‖]. Suppose further that we are in an asymptotic setting where ξ/d → 0. Proof. Let us first show that our regularization does not change the vector we are dealing with, with high probability: ṽ_ε = v as long as ‖(Id + ∆)⁻¹‖_2 ≤ 1/ε. Since ξ = µ/(pn^{α_min})^{1/2} → 0 and we have, according to Theorem A-2, ‖E‖_2 ≤ 2µ/(pn^{α_min})^{1/2} with high probability, we conclude that with high probability, ṽ_ε = v.
Using Equation (8) with j = 1, we see that, since E[E] = 0 by construction and R is a fixed deterministic matrix while u is a deterministic vector, the expectation of the first-order term vanishes. So, if we now use the fact that ‖u‖ = 1, it remains to control the right-hand side of the resulting expansion. We prove in Theorem A-2 that ‖E‖_2 concentrates around m_E, where m_E is a median of the random variable ‖E‖_2. Our asymptotic control of ‖E‖_2 in (5) allows us to control m_E. In other respects, we clearly have ‖M‖_∞ ≤ Σ_{i=1}^n λ_i ‖u_i‖_∞², and hence it is controlled in the setting we consider. Furthermore, we prove in Theorem A-2 corresponding moment bounds. At the end of Subsection 2.2, we mentioned a corollary (see below) of the following theorem: Theorem A-4. Suppose that p = (log n)^{1−δ} u_n/n, for a fixed δ in (0, 1) and, for a fixed κ, 0 < u_n ≤ κ.
Suppose further that we can find a sequence h such that h/(pn) = v_n → ∞ with v_n = o(log n). Then ‖C‖_2/n^{1/2} goes to infinity.
Recall that, practically, this theorem suggests that if we do not sample the matrix M enough (i.e. p is too small), a subsampling approximation to its eigenproperties is not likely to work. Let us now prove it.
Proof. Our strategy is to show that the largest diagonal entry of CᵀC/n goes to infinity. To do so, we rely on results from random graph theory. Let us examine this diagonal more closely. Using the definition of C, we see that, if T = CᵀC and d_i is the number of times ((1 − p)/p)^{1/2} appears in the i-th column of C, then T(i, i) grows with d_i. Now {d_i} is the degree sequence of an Erdős-Rényi random graph. According to [Bol01], Theorem 3.1, if k is such that n·C(n−1, k)·p^k(1 − p)^{n−1−k} → ∞, then, if X_k is the number of vertices with degree greater than k, lim_{n→∞} P(X_k ≥ t) = 1 for any t. So if we can exhibit such a k, then max d_i ≥ k with probability going to 1. We now note that for small p, such a k can be exhibited. Hence, if our k is also such that k/(pn) → ∞, we will indeed have the desired divergence and the theorem will be proved.
• h/(pn) = v_n = o(log n) by assumption. We can finally conclude that max_i T(i, i)/n ≥ k/(2np) with probability going to 1.
But because v_n → ∞, we have k/(2np) → ∞, and the theorem is proved.
We have the following corollary to which we appealed in Subsection 2.2.
The previous corollary follows immediately from Theorem A-4, by noticing that u_n is lower bounded under our assumptions and by taking v_n = (log n)^{δ/5}.
A-4 Variance computations
We provide some details here to complement the explanations we gave in the proof of Theorem 4 in Subsection 2.6.
On E[E²]. Let us explain why this matrix is diagonal and compute the coefficients on the diagonal. Recall that E = ((1 − p)/p)^{1/2} M • C, where C is a random matrix whose above-diagonal elements are independent, have mean 0 and variance 1. E is naturally symmetric and we call E_i its i-th column. Naturally, E²(i, j) = E_iᵀE_j. Suppose first that i ≠ j. The elements of E_i and E_j are independent, except for E_ij and E_ji, which are equal. In particular, E_ki and E_kj are independent for all 1 ≤ k ≤ n. Recall also that E[C] = 0, so E[E] = 0. Combining all these elements, we conclude that, if i ≠ j, E[E_iᵀE_j] = 0. Therefore E[E²] is diagonal. Let us now turn our attention to computing the elements of the diagonal. This is simple, since E[E²](i, i) = Σ_k E[E_ki²] = ((1 − p)/p) Σ_k M_ki². We note that this is the result we announced in the proof of Theorem 4 in Subsection 2.6.
On var(uᵀEu). Rewriting this quantity as a sum of independent quantities greatly simplifies the computation: we can decompose uᵀEu over the entries of E that are on or above the diagonal. Because the resulting expression is a sum of independent random variables, we immediately obtain its variance. Calling w = u • u and M̃ = M • M, we immediately recognize in the last expression the quantity announced in the proof of Theorem 4.
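Both facts are easy to sanity-check by simulation; the sketch below draws many independent copies of E and verifies numerically that E[E²] is approximately diagonal, alongside an empirical estimate of var(uᵀEu):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 40, 0.3, 4000
A = rng.standard_normal((n, n)); M = (A + A.T) / 2
u = np.linalg.eigh(M)[1][:, -1]          # leading eigenvector of M

sq_sum = np.zeros((n, n)); quad = []
for _ in range(reps):
    q = (rng.random((n, n)) < p).astype(float)
    q = np.triu(q) + np.triu(q, 1).T     # symmetric Bernoulli pattern
    E = M * q / p - M                    # E = S - M, mean zero
    sq_sum += E @ E
    quad.append(u @ E @ u)

EE = sq_sum / reps                        # Monte Carlo estimate of E[E^2]
off = EE - np.diag(np.diag(EE))
print(np.abs(off).max() / np.abs(np.diag(EE)).max())  # small: ~diagonal
print(np.var(quad))                                    # empirical var(u^T E u)
```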
Figure 2: Left: Alignment uᵀv between the true and the normalized average of 1000 subsampled eigenvectors (blue circles), median value of uᵀv over all sampled matrices (solid black line), with dotted lines at plus and minus one standard deviation, and proportion of samples satisfying the perturbation condition (9) (dashed red line), for various values of the sampling probability p on a gene expression covariance matrix. Right: Zoom on the interval p ∈ [10⁻², 1].
Figure 3: Left: Alignment uᵀv between the true and the normalized average of 1000 subsampled left eigenvectors (blue circles), median value (solid black line), dotted lines at plus and minus one standard deviation, and proportion of samples satisfying condition (9) (dashed red line), for various values of the sampling probability p on a term-document matrix with dimensions 6779 × 11171. Right: Speedup in computing leading eigenvectors on gene expression data, for various values of the sampling probability p.
Figure 4: Comparison of the leading eigenvector of the gene expression covariance matrix with the leading left eigenvector of the term-document matrix.
Figure 5: Left: Alignment uᵀv between the true leading eigenvector u and the normalized average leading eigenvector versus number of samples, on the gene expression covariance matrix with subsampling probability p = 10⁻². Right: Alignment uᵀv for various values of the spectral gap λ_2/λ_1 ∈ {0.75, 0.95, 0.99}.
Figure 6: The wb-cs.stanford graph (left) and the Pagerank vector for cnr-2000 in loglog scale (right).
Figure 7: Ranking correlation (Spearman's ρ) between true and averaged Pagerank vector (blue circles), median value of the correlation over all subsampled matrices (solid black line), dotted lines at plus and minus one standard deviation, and proportion of samples satisfying the perturbation condition (9) (dashed red line), for various values of the sampling probability p. Left: On the wb-cs.stanford graph. Right: On the cnr-2000 graph. | 10,539.2 | 2009-08-02T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
Anterior Cervical and Upper Thoracic Column Reconstruction Using an Expandable Poly-Ether-Ether-Ketone Vertebral Body Replacement: A Retrospective Single Center Cohort Analysis
This study aimed to evaluate the safety and efficacy of a novel Poly-Ether-Ether-Ketone (PEEK) expandable vertebral body replacement (VBR) for anterior cervico-thoracic vertebral column reconstruction in patients with metastatic, traumatic, or degenerative diseases. Radiographic and clinical outcomes, as well as complication rates, were analyzed in a retrospective analysis of 28 patients (61 ± 13 years; 64% female) who underwent an anterior cervical corpectomy and fusion (ACCF) with the Expandable Corpectomy Device (ECD) from DePuy/Synthes (2011-2020). Correction of the bisegmental kyphotic angle (BKA) was chosen as the primary outcome. Bony fusion, loss of device height, and implant subsidence were evaluated additionally. Clinical outcome was assessed using Odom's criteria, the numerical pain rating scale (NRS), the American Spinal Injury Association Impairment Scale (AIS), and the Karnofsky Performance Status Scale (KPSS). Our study found a significant improvement in the BKA (12.3° ± 9.6°; p = 0.0002) at the last follow-up with no statistically relevant loss of device height (p = 0.96) or implant subsidence (p = 0.99). Successful bony fusion was observed in all patients. The KPSS significantly improved in patients with a tumorous disease at the time of discharge (p = 0.0009), and the sensation of pain showed significant improvement at six months post-operatively and at the final follow-up (p = 0.004; p = 0.021). However, four patients needed further secondary posterior stabilization, and one ECD was explanted due to a severe surgical site infection after an accidental esophageal lesion. In conclusion, the ECD proved radiographically stable for anterior column reconstruction of the cervico-thoracic spine, with significantly improved clinical outcomes.
Introduction
The reconstruction and stabilization of the anterior cervico-thoracic spinal column have remained challenging since Denis's first description of the three-column model in 1983 [1]. After Robinson and Smith first described the anterior approach to the cervical spine in 1958, the anterior cervical corpectomy and fusion procedure (ACCF) was established as a favorable and safe surgical procedure to stabilize the anterior cervical and upper thoracic spinal column [2]. An ACCF with a vertebral body replacement device (VBR) shows reliable spinal alignment, stability, and clinical outcomes in patients suffering from neoplasms, degenerative diseases, or trauma [3,4]. In former years, non-expandable and expandable VBRs replaced traditionally used bony auto- or allografts, which are accompanied by non-negligible morbidity at the donor site and higher rates of pseudoarthrosis [5][6][7].
In recent years, implants made of Poly-Ether-Ether-Ketone (PEEK), an organic polymer thermoplastic, gained popularity in spine surgery [8]. PEEK is chemically inert and resistant to thermal, chemical, and post-irradiation degradation [9]. It is not cytotoxic and even increases the amount of osteoblast cell protein, which stimulates bone formation around the implant [10,11]. Furthermore, intervertebral PEEK implants exhibit similar rates of bony fusion when compared to those of titanium implants [12]. Being radiolucent, PEEK allows for good visualization of bony structures in post-operative imaging, the assessment of fusion, precise CT planning for irradiation, and reliable radiographic follow-ups for tumorous diseases [9,13,14]. Moreover, PEEK has an elastic modulus comparable to that of spongious bone, allowing for better stress distribution across the implant-adjacent vertebral bodies and less implant subsidence [13,15].
The first study reporting on non-expandable PEEK-VBRs was published in 2014 [16]. Since then, expandable PEEK-VBRs such as the Expandable Corpectomy Device (ECD) by DePuy/Synthes have been developed to provide personalized individual adaptation to the anatomical structures [17]. However, only limited clinical and radiological data demonstrate the safety and efficacy of the PEEK ECD and comparable devices in restoring and maintaining the cervico-thoracic anterior spine, as well as peri- and post-operative complications. Expandable cervical cages offer the advantage of reducing the risk associated with intraoperative cage insertion, thereby minimizing the likelihood of damaging the endplate [6]. Furthermore, expandable cages show a significantly lower subsidence rate while concurrently demonstrating an improved correction of cervical lordosis compared to non-expandable cages [18]. This is particularly advantageous in patients with compromised bone quality, such as those with metastatic tumors or osteoporosis. An issue associated with the utilization of expandable cages is the potential for a reduced fusion rate compared to both structural bone grafts and hollow cages, due to less space for bone formation inside the implant [6,18]. Additionally, expandable cages show a heightened displacement rate, especially evident in multi-level corpectomy cases, consequently resulting in an elevated incidence of reoperation compared to non-expandable VBRs [18,19].
Therefore, this study aimed to assess the safety and efficacy of the ECD-VBR by analyzing (1) radiographic outcomes, (2) clinical outcomes, and (3) complication rates in patients with metastatic diseases, spinal fractures, and progressive degenerative disease needing treatment with an ACCF.
Materials and Methods
A single-center, retrospective cohort study was performed after the institutional Review Board gave permission (KEK 2020-01127). All patients who underwent ACCF with the ECD cage (DePuy Synthes Spine Inc., Raynham, MA, USA) in our institution from January 2011 until June 2020 were included [17]. Exclusion criteria comprised patients younger than 18 years, dissent from the general consent, and anterior reconstructions exceeding three adjacent vertebral bodies. A minimum follow-up of six months was required. Twenty-eight patients were included for perioperative complication assessment and clinical outcome analysis. After surgery, eight patients passed away before completing the six-month follow-up due to an underlying tumor disease. Two patients were lost to follow-up. In one case, the ECD had to be replaced due to a vast early-onset surgical site infection (SSI) after the initial surgery. Thus, 17 patients were included in the outcome analysis with a mean follow-up of 1.0 [0.6; 1.8] (range 0.5-6.0) years. The patient selection process is outlined in Figure 1.
Surgical Technique
The ECD (Figure 2) was implanted in all cases in combination with a titanium (n = 14) or carbon (n = 14) anterior plate-screw osteosynthesis, using the standard anterior Smith and Robinson approach [2]. After the complete removal of the adjacent intervertebral discs, a corpectomy was performed at the level of interest. We used the sliding caliper to measure the approximate size of the implant. The chosen ECD was attached to the "holding and distraction" instrument to align it in situ in the frontal and sagittal planes. The ECD was expanded to the desired physiologic height, followed by installing the locking clip of the expansion mechanism. Allograft chips were placed anterior and lateral to the cage. No chips were placed between the cage and the spinal cord. Additionally, a posterior spondylodesis was mandatory in 12 (42.9%) cases due to multi-level corpectomy or affection of the cervico-thoracic junction. Post-operative care comprised a soft neck collar for six weeks and pain-adapted analgesia.
Radiologic Outcome
The Sectra Workstation IDS7 software (Version 23.2, Sectra AB, Linköping, Sweden) was used to conduct the radiologic assessment and measurements on standing lateral cervico-thoracic spine radiographs. If the conventional radiograph was not suitable for exact measurements, CT or MRI images were used if available.
As the primary outcome, the adjustment of local kyphosis after surgery was assessed using the "bisegmental kyphotic angle (BKA)" [21]. The BKA is the angle between the caudal endplate of the caudal adjacent vertebral body and the cranial endplate of the supra-adjacent vertebral body (Figure 3). Global lordosis of the cervical spine was determined by measuring the "sagittal alignment of the cervical spine (SACS)", defined by the angle between the lower endplate of C2 and the lower endplate of C7 on a lateral radiograph [22]. If the ECD was in position C7 or below, the caudal endplate of the caudal adjacent healthy vertebra was used.
The "height coefficient (HC)", as published before [21], was used to assess the stability of the ECD's expansion mechanism (Figure 4a).The height coefficient difference (∆HC) was calculated by subtracting the height coefficient at the given follow-up from the height coefficient early post-operative.Subsidence of the ECD into the adjacent vertebral endplates was evaluated using the subsidence coefficient (SC), as proposed by Schnake et al. (Figure 4b) [23].The subsidence coefficient difference (∆SC) was calculated analogously to ∆HC.In addition, correct cage placement (>90% of the ECD has contact with the adjacent vertebral endplates) was evaluated early post-operatively.The "height coefficient (HC)", as published before [21], was used to assess the stability of the ECD's expansion mechanism (Figure 4a).The height coefficient difference (∆HC) was calculated by subtracting the height coefficient at the given follow-up from the height coefficient early post-operative.Subsidence of the ECD into the adjacent vertebral endplates was evaluated using the subsidence coefficient (SC), as proposed by Schnake et al. (Figure 4b) [23].The subsidence coefficient difference (∆SC) was calculated analogously to ∆HC.In addition, correct cage placement (>90% of the ECD has contact with the adjacent vertebral endplates) was evaluated early post-operatively.The "height coefficient (HC)", as published before [21], was used to assess the stability of the ECD's expansion mechanism (Figure 4a).The height coefficient difference (∆HC) was calculated by subtracting the height coefficient at the given follow-up from the height coefficient early post-operative.Subsidence of the ECD into the adjacent vertebral endplates was evaluated using the subsidence coefficient (SC), as proposed by Schnake et al. (Figure 4b) [23].The subsidence coefficient difference (∆SC) was calculated analogously to ∆HC.In addition, correct cage placement (>90% of the ECD has contact with the adjacent vertebral endplates) was evaluated early post-operatively.Bony fusion evaluation was performed at six months and the final follow-up, respectively, using the grading system by Eck et al. [24].Visible trabeculae and remodeling on a lateral radiograph between the ECD and adjoining vertebral bodies is defined as grade I (definite fusion), whereas a not completely incorporated vertebral body replacement device without visible lucency > 1 mm is defined as grade II (probable fusion).With lucency at the bottom and/or top of the VBR, the fusion was defined as grade III (probably not).Therefore, grade I and grade II were considered successful fusion.
Clinical Outcome
The primary clinical outcome was quantified by evaluating the patient's symptoms pre- and post-operatively using Odom's criteria [25]. The absence of all pre-operative symptoms characterizes an "excellent" outcome; a "good" outcome is defined by minimal persistence of pre-operative symptoms; and a "fair" outcome signifies relief from some pre-operative symptoms. Conversely, unchanged or worsened symptoms represent a "poor" outcome. A Numerical Rating Scale (NRS) ranging from zero to ten assessed the experienced level of cervical pain.
The American Spinal Injury Association Impairment Scale (AIS) was used to characterize neurologic deterioration [26].
The Karnofsky Performance Status Scale (KPSS) was determined to assess the quality of life and the level of mobility in patients with underlying carcinosis [27]. In addition, the Spinal Instability Neoplastic Score (SINS) was determined in those patients before surgery [20].
Complication Assessment
Intraoperative complications were distinguished as complications associated with the ECD (e.g., a faulty device, no expansion possible, expansion mechanism not lockable) and general complications (e.g., difficulty with intubation, high blood loss (≥1000 mL for single-stage, ≥1500 mL for two- or three-stage procedures), or severe circulatory problems resulting in prolonged surgical time or abortion of the procedure).
Statistical Analysis
Statistical analysis was performed with GraphPad PRISM Version 8.0.1 (GraphPad Software, San Diego, CA, USA). The Shapiro-Wilk test was used to test for normal Gaussian distribution. Descriptive statistics are shown as mean ± standard deviation (range minimum-maximum) if normally distributed. If non-normally distributed, descriptive statistics are shown as median [interquartile range (IQR) 25% percentile; 75% percentile] (range minimum-maximum). Single-group comparisons were performed using paired (unpaired) t-tests for normally distributed data and the Wilcoxon signed-rank test (Mann-Whitney U-test) for non-normally distributed data. One-way ANOVA was conducted for paired multiple comparisons with complete data, and a mixed-effects analysis in case of missing data points. Statistical significance was set at α ≤ 0.05.
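As a hedged illustration (the study itself used GraphPad PRISM, not code), the same decision flow of a normality check followed by a paired t-test or a Wilcoxon signed-rank test can be reproduced with SciPy; the sample data below are synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(4.5, 2.3, 17)    # synthetic pre-operative NRS scores
post = rng.normal(1.8, 1.2, 17)   # synthetic follow-up NRS scores

# Shapiro-Wilk test for normality on the paired differences
_, p_norm = stats.shapiro(pre - post)

if p_norm > 0.05:                 # normally distributed: paired t-test
    _, p = stats.ttest_rel(pre, post)
else:                             # non-normal: Wilcoxon signed-rank test
    _, p = stats.wilcoxon(pre, post)

print(f"p = {p:.3f}; significant at alpha <= 0.05: {p <= 0.05}")
```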
Surgical Details
A summary of surgical details is given in Table 2. C7 was replaced most frequently (n = 10, 35.7%). All patients (n = 28) received an anterior plate-screw osteosynthesis, covering two segments in most cases (50%). Twelve (42.9%) patients received an additional posterior spondylodesis during the initial surgery. During follow-up, a secondary stabilization was necessary in four (14.3%) other patients (median 27 (range: 21-303) days after the initial surgery).
Radiologic Outcomes
Seventeen patients were evaluated for radiographic fusion at the final follow-up, a median of 1.0 years [0.6; 1.8] (range 0.5-6.0) post-operatively. Grade I "definite fusion" was noted in nine patients (52.9%) and grade II "probable fusion" in eight patients (47.1%) [24]. There was no grade III or grade IV present. A correct ECD placement, assessed on early post-operative radiographs, was found in all patients (n = 28, 100%) and did not differ over time. The ∆HC (Figure 5b) showed no statistically relevant differences at the last follow-up (p = 0.96); the mean value of 0.02 ± 0.09 (range: −0.09-0.23) at the final follow-up indicates no failure of the expansion system and confirms the stability of the ECD. At the last follow-up, the ∆SC (Figure 5c) likewise showed no differences in subsidence (0.12 ± 0.16 (range: −0.19-0.48); p = 0.99).
Clinical Outcomes
Table 3 demonstrates the main clinical outcomes. Directly after surgery, the majority of patients reported an "excellent" (n = 3; 10.7%) or "good" (n = 16; 57.1%) outcome. At the final follow-up, most patients showed an "excellent" (n = 8; 50%) or "good" (n = 6; 37.5%) and no "poor" outcome. Experienced pain decreased significantly from an average of 4.5 ± 2.3 (range: 0-9) pre-operatively to 2.4 ± 2.0 (range: 0-6) at discharge (p = 0.009) and 1.8 ± 1.2 (range: 0-4; p = 0.004) at six months post-operatively (Figure 6). At the final follow-up, patients experienced a mean NRS of 2.34 ± 2.8 (range: 0-10), showing a stable clinical situation compared to the post-operative state (p = 0.066) and significantly improved pain relief compared to the pre-operative status (p = 0.021). Furthermore, 20 patients (71.4%) showed an AIS grade of E post-operatively compared to 16 (59.3%) pre-operatively. Before surgery, two patients (7.4%) showed a grade of C; one improved to D early post-operatively, and one improved to a grade of E by the final follow-up. One patient's pre-operative AIS grade of D remained constant during follow-up. One trauma patient (tetraplegia after a fall from a tree) showed a grade of A pre-operatively, which remained unchanged after surgery; the patient was lost to follow-up due to further care in a paraplegic center. Five (17.9%) of all patients improved by at least one AIS grade directly after surgery; no grade worsened over time. A total of 93.8% (n = 15) of the patients examined at the final follow-up showed an AIS grade of E, except one (6.3%) with a grade of D. An average KPSS of 75.8 ± 13.9 (range: 50-100) pre-operatively (n = 19) declined significantly (p < 0.001) to 62.2 ± 10 (range: 40-80) at discharge. The mean KPSS at the final follow-up was 74.5 ± 19.2 (range: 30-100), a stable score compared to that post-operatively (p = 0.28). Except for two patients, all tumor patients (n = 17; 89.5%) were treated with either post-operative irradiation or adjuvant chemo- or systemic therapies. Throughout this study, 14 (50.0%) patients passed away on average 1.0 ± 1.1 years (range: 0.05-3.7) after the initial surgery, with a mean age of 65.6 ± 10.6 years (range: 46-86); all suffered from carcinosis.
Complications
No intraoperative complications related to the ECD occurred. The mean blood loss was 451 ± 420 (40-1700) milliliters. Eight (28.6%) patients experienced approach-specific post-operative complications; 91% of those were minor complications, with odynophagia being the most common (n = 4), followed by dysphagia (n = 3) and hoarseness (n = 3). The minor complications resolved without reoperation, recovering on average after 6.8 ± 6.9 (2-23) days. In one case, an esophageal perforation occurred, requiring multiple revision surgeries. Due to bacterial colonization, the ECD had to be replaced with a titanium VBR, and a posterior spondylodesis was added. After ECD explantation, the patient dropped out of our series.
Five (17.9%) patients required reoperation. Three of those patients suffered from SSI: two of the infections occurred with the additional posterior spondylodesis and one with the anterior approach. In none of those cases was it necessary to remove the implants. In two cases, wound revision was indicated due to a wound healing disorder of the posterior approach.
In no case did an ECD have to be replaced due to insufficient stability caused by the ECD or the malfunctioning of the expansion system.
Discussion
An ACCF has been established as a favorable and safe surgical procedure to stabilize the anterior cervical spinal column. Several stabilization systems for an ACCF have been developed. Recently, the ECD cage was introduced, providing personalized individual adaptation, better stress distribution, and radiolucency compared to previous systems. This study aimed to assess the general safety and efficacy of the ECD by analyzing (1) the radiographic outcome, (2) the clinical outcome, and (3) the complication rates in patients suffering from metastatic diseases, traumatic cervical fractures, or progressive degenerative disease.
Our study indicates that the ECD is a valuable tool to realign and maintain local cervico-thoracic alignment and anterior stability in patients who have undergone a cervico-thoracic anterior corpectomy. The BKA improved from a pre-operative mean of 5.9 ± 13.3 degrees kyphosis to a mean of 7.5 ± 7.8 degrees lordosis directly post-operatively (p < 0.001). There was a negligible loss of correction of 1.2 ± 2.5 degrees at the last follow-up compared to directly post-operatively (p = 0.17). The same applies to the SACS, demonstrating recovery and conservation of a physiological lordotic posture [28]. Furthermore, all patients with a follow-up of at least six months showed successful bony fusion, thereby achieving the primary purpose of a VBR [29]. Furthermore, we demonstrated the safety of the expansion and locking mechanism of the ECD. A collapse of the VBR was not recorded, and ∆HC showed no substantial differences at the final follow-up (p = 0.96). The same applies to subsidence: at the final follow-up, the mean ∆SC showed subsidence of only 0.12 ± 0.16 (p = 0.99). As shown before and in line with the corresponding literature, minor and even statistically significant subsidence has no clinical relevance in these cases [30], an assertion supported by our predominantly good and excellent clinical results according to Odom's criteria. One patient showed a ∆SC of 0.48 at the final follow-up while being the only patient with a one-level decrease in Odom's criteria at the last follow-up compared to the previous evaluation ("excellent" at 16 months, "good" at six years post-operatively). Several studies report on VBR or cage subsidence risk factors, such as age, female sex, multi-level ACCF, and a C6 corpectomy [30][31][32][33]. Our patient (71 years, female) had multiple risk factors for VBR subsidence: besides being female and elderly, she suffered from rheumatoid arthritis and received a bi-level ACCF (C4-C7). Consequently, we conclude that the subsidence was caused by an accumulation of risk factors rather than by the ECD or its design [30][31][32][33]. A failure of the expansion mechanism was excluded in this case.
The clinical outcomes showed that an ACCF with an ECD leads to cervical pain relief and neurological improvement compared to the pre-operative state. Directly after surgery, 92.9% (n = 26) of patients reported at least "relief of some pre-operative symptoms"; only two patients (7.1%) reported a "poor" outcome according to Odom's criteria [25]. Both were still experiencing unchanged cervical pain; one patient had already suffered from chronic cervicobrachial pain before surgery. Experienced pain measured with the NRS was greatly relieved at discharge from the hospital (p = 0.009) compared to the pre-operative baseline. It remained substantially lowered until the last follow-up (p = 0.066) and remained significantly improved compared to the pre-operative state (p = 0.021). Functional neurologic impairments, measured by the AIS grade, improved in 17.9% of all patients early post-operatively (mean 6.3 ± 2.9 (range: 2-12) days after surgery). Fifty-nine percent of all patients reported no neurological impairment (AIS E) pre-operatively. In no patient did neurological impairment worsen after surgery compared to the pre-operative status. The promising clinical results are comparable to those of other studies using allo- or autografts, titanium mesh cages, or titanium VBRs for ACCF [3,4,34,35].
Surgical tumor debulking, coupled with post-operative irradiation and chemo- or systemic therapy, is crucial for tumor control of spinal metastases [36,37]. Our data, measured with the KPSS, show an improvement in the general well-being of tumor patients who survived at least six months after surgery. In our study population, the KPSS improved by a mean of 8.2 points to 84 ± 8.4 (range: 70-100) at six months post-operatively compared to the pre-operative score. The natural course of carcinosis explains the decline in the KPSS at the final follow-up [36]. In contrast to titanium mesh cages or titanium VBRs, the ECD allows for optimal pre-irradiation CT planning and monitoring of tumor progression by MRI or CT owing to the radiolucency of PEEK, which produces no significant artifacts while providing biomechanical characteristics similar to those of titanium implants [38,39]. An MRI example is given in Figure 7.
No device-related intraoperative complications occurred. One patient experienced a secondarily diagnosed esophageal perforation. The resulting extensive early infection was managed with serial debridement and suture of the lesion; ultimately, a replacement of the ECD cage was necessary.
Patients (n = 7) with approach-specific complications (odynophagia, dysphagia, hoarseness) had recovered from these symptoms at discharge. General post-operative complications were resolved without sequelae by revision surgeries or blood transfusion. Due to progressive instability, four patients received a posterior spondylodesis in a revision procedure (median 27 [22.5; 99] (range: 21-303) days after the initial surgery). Three of those were suffering from cervical metastases, in which a sole ACCF did not provide sufficient stability due to the poor bone quality of the adjacent vertebral bodies. The remaining patient experienced an esophageal perforation requiring multiple revision surgeries, including posterior stabilization as described above. In summary, the surgical approach or the underlying disease, rather than the implant, caused the post-operative complications. According to a meta-analysis by Wang et al., the incidence of complications in our study population is within the usual range for an ACCF: Wang et al. described an incidence of dysphagia after an ACCF of 9.9% (4.8-15.9%) and an incidence of infection of 14.2% (−1.1-30.3%). In our population, the incidence was 10.7% (n = 3) for dysphagia and 10.7% (n = 3) for infection, respectively [40].
Limitations
The main limitation of this study is the retrospective design and the missing control group. Additionally, the small size of the study population constrained the extent of statistical analysis and the examination of risk factors. It was also unfeasible to form a study cohort of sufficient size that exclusively comprised patients who had undergone an anterior cervical corpectomy and fusion (ACCF) with the ECD and without posterior spondylodesis; this rendered the analysis of isolated cases of ACCF with the ECD unachievable. Furthermore, most patients requiring cervical corpectomy suffer from severe metastatic disease. Therefore, post-operative follow-up was limited by the restricted life expectancy associated with the underlying tumorous disease. As follow-up CT scans were not accessible in most cases, bony fusion assessment was performed on conventional radiographs. According to Fogel et al., fusion assessment on plain radiographs has an accuracy similar to that of CT scans if a radiolucent implant is used; our results support this assertion [41]. Two patients had a CT scan at a time comparable to that of a radiograph, showing the same bony fusion grade on both imaging procedures (Figure 8). Despite these limitations, reporting outcomes of new techniques for the surgical therapy of severe but rare cervical pathologies is essential.
Conclusions
This is the first study analyzing the radiologic and clinical outcomes of the ECD PEEK cage in the cervico-thoracic spine. The study revealed that the ECD restored and maintained the anterior vertebral column in patients with metastatic, traumatic, and degenerative diseases. Furthermore, the rate and degree of complications did not exceed those of other surgical treatment modalities for these pathologies. Thus, the ECD PEEK cage is a safe and efficient therapy option for anterior vertebral column reconstruction, even though studies with greater patient cohorts or a prospective design are necessary to support this thesis.
Figure 2 .
Figure 2. The Expandable Corpectomy Device (DePuy Synthes Spine Inc., Raynham, MA, USA) with (a) an exemplary 3D rendering and (b) the different available sizes (17-22 mm to 50-70 mm) with corresponding endplate angulations (4.5°-7°). The device is entirely made of PEEK. The body contains a continuous expansion mechanism, which a locking clip secures. The arrows point cranially. Each of the spiked, anatomically shaped endplates has three radiopaque markers to scrutinize the device's position in situ. (Printed with permission from DePuy Synthes.)
Figure 3 .
Figure 3. Measurement of the bisegmental kyphotic angle (BKA) on a lateral projection of the cervical spine in a patient suffering from multiple myeloma in C4. (a) Pre-operative BKA: −3.8°. (b) Early post-operative BKA: +5.3°, showing adjustment of local kyphosis after ACCF. ECD in C4, with visible rectangular radiopaque markers at the cranial and caudal ECD endplates, and anterior plate-screw fixation (carbon) covering C3-C5 with carbon screws (Ico-Tec AG, Altstätten, Switzerland). Kyphotic angles are indicated as negative values (−) and lordotic ones as positive values (+), respectively.
Figure 5 .
Figure 5. (a) Progression of the bisegmental kyphotic angle (BKA) over time. Negative values imply kyphosis, and positive values equal lordosis. (b) The height coefficient over time, pictured as the height coefficient difference (∆HC), calculated by subtracting the height coefficient at the given follow-up from the early post-operative height coefficient. Negative values imply a compression of the upper adjacent vertebral body, and positive values imply a loss of height of the ECD. (c) Subsidence coefficient difference (∆SC) at the consecutive follow-up intervals. A negative value implies compression of the upper adjacent vertebral body, and a positive value implies subsidence into the cranial or caudal adjacent vertebral body. Preop. = pre-operative, postop. = post-operative, ∆ = difference.
Figure 8 .
Figure 8. Bony fusion assessment on a conventional radiograph ((a) lateral view, (b) a.p. view) 12 months after surgery and a CT scan ((c) sagittal plane, (d) frontal plane) of the same patient. The CT scan was acquired one month after the conventional radiograph. The ECD is in position C5, with anterior plate-screw osteosynthesis (carbon plating, carbon screws) covering segments C4/C5 and C5/C6. The radiolucency of the ECD allows for an accurate assessment of bony fusion on a conventional radiograph. Both images show bony fusion grade I (definite fusion).
Table 1 .
Demographic data of the study population.
Table 2 .
General surgical information.
Table 3 .
Detailed clinical outcome comparing pre-operative status to final follow-up. ASIA Impairment Scale (AIS) grades and clinical outcomes (classified using Odom's criteria) are shown across consecutive follow-up intervals, with the percentage of patients at each follow-up interval.
"Medicine",
"Engineering"
] |
Spatial Disparities of COVID-19 Cases and Fatalities in United States Counties
This paper examines the spatial and temporal trends in county-level COVID-19 cases and fatalities in the United States during the first year of the pandemic (January 2020–January 2021). Statistical and geospatial analyses highlight greater impacts in the Great Plains, Southwestern and Southern regions based on cases and fatalities per 100,000 population. Significant case and fatality spatial clusters were most prevalent between November 2020 and January 2021. Distinct urban–rural differences in COVID-19 experiences uncovered higher rural cases and fatalities per 100,000 population and fewer government mitigation actions enacted in rural counties. High levels of social vulnerability and the absence of mitigation policies were significantly associated with higher fatalities, while existing community resilience had more influential spatial explanatory power. Using differences in percentage unemployment changes between 2019 and 2020 as a proxy for pre-emergent recovery revealed urban counties were hit harder in the early months of the pandemic, corresponding with imposed government mitigation policies. This longitudinal, place-based study confirms some early urban–rural patterns initially observed in the pandemic, as well as the disparate COVID-19 experiences among socially vulnerable populations. The results are critical in identifying geographic disparities in COVID-19 exposures and outcomes and providing the evidentiary basis for targeting pandemic recovery.
Introduction
Local population dynamics and sociodemographic characteristics have proven influential on the prevalence of coronavirus disease 2019 (COVID-19) impacts and transmissibility. However, early mitigation measures did not account for place-based differences in exposure and outcomes at comparative and more localized spatial scales [1]. Unprecedented United States government mitigation policies enacted to prevent the spread of COVID-19 led to drastic unemployment increases, which did not occur equally across the country [2]. Initial COVID-19 cases and fatalities in the US first appeared in densely populated urban centers [3], before spreading to rural communities throughout the country [4]. Rural populations continually face a unique set of challenges relating to unequal access to healthcare and a higher prevalence of underlying health conditions, thus placing them at higher risk of negative COVID-19 outcomes [5]. Underlying disparities in social vulnerability, community resilience and COVID-19 mitigation measures among US communities and urban-rural places must then influence COVID-19 exposure and outcomes, but how?
Existing COVID-19 research has not integrated the full suite of social vulnerability measurements along with multiple societal elements of place that could explain patterns of COVID-19 cases and fatalities within local geographic contexts. Additionally, spatial analyses of initial community recovery based on unemployment across the US lack investigation, despite the vastly disproportionate impact of the pandemic mitigation orders on the job market. Applying measurements of social vulnerability, community resilience, government mitigation efforts and unemployment changes, this place-based geographical investigation examines spatial and temporal differences in COVID-19 exposures (cases) and outcomes (fatalities) at the county level, using statistical and geo-analytical methods. Our results identify how combined contextual correlates explain the geographic distribution of cases/fatalities and initial economic recovery throughout the first year of the pandemic (January 2020-January 2021). Three questions guide this analysis:
1. What is the variability in county spatial and temporal patterns of COVID-19 cases and fatalities?
2. What is the relationship between COVID-19 cases and fatalities, social vulnerability, community resilience and government mitigation measures, and does this vary based on location (urban-rural)?
3. Do differential spatial patterns of pre-emergent recovery exist using changes in unemployment as a proxy indicator?
Background
From the outset of the pandemic, research and popular concern focused on the disproportionate impact on minority, low-income and elderly populations [6][7][8]. Only recently has there been a focus on place, or what Perry et al. [9] term the precarity of place, in addition to urban-rural influences on COVID-19 experiences [10]. Research has mostly examined the geography of COVID-19 risk based on individual-level factors (e.g., social determinants of health) or community-based elements (e.g., resilience, social vulnerability, or public health mitigation measures) over a short study period or wave of the pandemic [11,12], or in specific sub-national areas [9,13,14]. Thus far, there has not been a longer-term (year-long) study for the US examining the spatial and temporal county-level variability in COVID-19 cases and fatalities that considers several influential elements of social context or measurements of pre-emergent economic recovery.
Socioeconomic and Spatial Disparities of the COVID-19 Pandemic
The social determinants of health and social vulnerability to environmental hazards are crucial elements to consider in evaluating the spatial and temporal dynamics of the COVID-19 pandemic. The social determinants of health are the non-medical factors (e.g., income, wealth, education, race and ethnicity, geographic location and gender) that can shape an individual's health in formative ways [15,16]. Social vulnerability is widely studied in geography, anthropology, economics, public health and psychology to identify inequalities and sub-populations that are more at risk to hazard events [17,18]. While definitions of social vulnerability may vary by discipline, it is widely agreed that not all people and places cope with and adapt to hazards equally, whether from institutional barriers, human and social capacities, or the physical environment [19][20][21]. In the context of COVID-19, minority status and crowded housing conditions have correlated with case rates, as well as overall levels of social vulnerability [11,22]. Spatial variability in COVID-19 exposures and outcomes among different demographic and socioeconomic groups highlights the role of race and non-English speakers in elevated case counts, as well as age and disability associated with elevated fatalities [5,12,23]. African Americans have a 2.7 times greater chance of being hospitalized for COVID-19 than white patients [24], as well as an increased likelihood of testing positive for COVID-19 [25,26]. Public health inequities arise from socioeconomic disadvantages [27] and COVID-19 is no exception.
In addition to examining how the spatial variance of socioeconomic variables impacts the spatial distribution of COVID-19 cases and fatalities, recent research has found a geographic pandemic divide. Regionally, the Southeast, Southwest and New England experienced relatively more COVID-19 cases than other US regions [23]. These and other regional patterns [28,29] may not hold over a longer study time-period. Urban-rural disparities explored in earlier research found increased mortality rates [30] and higher prevalence rates of COVID-19 infection in urban counties [12]. Another study found higher standardized cases and fatalities in rural counties in just one US state [13], while others found severe negative impacts of the pandemic on rural unemployment and economic wellbeing [10]. Not only do rural residents have higher positive COVID-19 testing rates [26] and fewer hospitalization resources available for affected patients [31,32], but locational disparities exist in testing access and pandemic messaging in rural locations [33].
Community Resilience to Hazards
Resilience addresses people and places' ability to withstand the adverse effects of hazards, as well as their ability to recover from and adapt to hazards [34] at multiple scales (e.g., individual, family and community). Studies have represented the multidimensional aspect of community resilience using domains signifying social, physical, community, individual, economic, institutional, infrastructure and/or ecological characteristics [35,36]. Common variables used to measure resilience include community-level variables, such as wealth, participation in civic and religious organizations, redundancy of critical infrastructure and hazard mitigation planning [35,37]. Community resilience and social vulnerability are related, but they are not the opposite or inverse of each other, with empirical studies showing a negative relationship with moderate strength between vulnerability and resilience [38][39][40][41].
COVID-19 Mitigation and Pre-Emergent Recovery
Research confirms that mobility restrictions (i.e., work-from-home, shelter-in-place, or stay-at-home orders), physical distancing and mask mandates lower COVID-19 confirmed cases and fatalities [42][43][44]. Early analyses of the COVID-19 outbreak in Wuhan, China, support the significance of limiting all travel to effectively control the spread during public health response planning [45]. In the US, mitigative efforts via work-from-home orders lessened the spread of COVID-19 cases between some communities [46]. Yet, Berry et al. [47] found little evidence to support that shelter-in-place orders affected disease spread or fatalities. Comparisons of voluntary mitigative actions versus mandatory stay-at-home orders found that, in counties with existing voluntary behaviors, the mandates simply accelerated compliance [48]. However, not all sociodemographic groups have the social or economic capital available to partake in voluntary mitigative action and can thus suffer worse economic burdens from such imposed mitigation orders [49]. Additionally, some research has shown the relative effectiveness of government orders on mobility reductions and individuals' voluntary mitigation decisions based on political partisanship [50][51][52].
Recovery is "both a social process with specific short and longer-term outcomes and a physical process of replacing the damaged built environment (or reconstructing it) . . . " [53] (p. 5). Economic recovery as part of the social process requires not only containing the spread of the virus and developing the vaccine, but also making and implementing policies that protect people's livelihoods, minimize financial suffering and place the economy in a better position for a faster resurgence [54]. In response to the COVID-19 pandemic, many countries have launched economic recovery programs to mitigate unemployment and stabilize core industries [55]. The US labor market is undergoing tremendous stress because of the COVID-19 outbreak and mitigation efforts [56], with many individuals becoming unemployed and losing health insurance coverage as a result [57]. A survey conducted in the early stage of the pandemic showed that 43% of small businesses temporarily closed and that employment had fallen by 40% [58]. Those impacted most heavily from job loss include some highly socially vulnerable populations of low-income individuals and/or racial/ethnic minorities working in employment sectors most affected by mitigation policies [2,49,59]. By April 2020, all 50 US states began easing mobility or business closure restrictions to revive the economy [60].
Research Design and Methods
Counties are the primary spatial unit for operational levels of emergency management and public health statistics in the US, thus the appropriate unit of analysis. This study includes 3140 counties or county-equivalent places in all 50 states and the District of Columbia. Two low population county-equivalents in Alaska, Yakutat City and Borough and Hoonah-Angoon Census Area, were not included in the analysis due to missing COVID-19 reports in the dataset. Table 1 summarizes all spatial data inputs. Listed below are detailed explanations of the input data, spatial caveats and initial data manipulations.
COVID-19 Cases and Fatalities
Publicly available COVID-19 daily cumulative case and fatality counts were downloaded from The New York Times GitHub [61] on 31 January 2021. The dataset includes counts for all US counties and was compiled based on reports from state and local governments, as well as local health departments, beginning with the first reported case in Washington State on 21 January 2020. Data included both confirmed cases (positive SARS-CoV-2 RNA laboratory test) and probable cases (based on criteria for symptoms, exposure and antibody testing). Confirmed deaths listed COVID-19 as the cause of death, while probable deaths were those with COVID-19 listed on the death certificate, but without a positive laboratory test. All counts for cases and deaths (confirmed and probable) used patients' county of residence. Daily case and fatality reports occasionally lacked an accurate county identifier for a patient, leading to some data recorded as "Unknown". In some instances, the availability of more accurate information enabled locational corrections to these unknown cases/fatalities, while, in others, they were not corrected [61]. In our analysis, 22 states had geographically unassigned cases and/or fatalities at the end of the study period, on 30 January 2021. Rather than eliminate these data, we proportionally distributed the data among counties based on existing ratios of county cases/fatalities in each state, by epidemiological (epi) week.
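A minimal sketch of this proportional redistribution, assuming a tidy table with hypothetical columns state, county, epi_week and cases, where county == "Unknown" rows hold the geographically unassigned counts:

```python
import pandas as pd

def redistribute_unknown(df: pd.DataFrame) -> pd.DataFrame:
    """Spread 'Unknown'-county counts across a state's counties by epi week,
    proportionally to each county's existing share of that week's cases."""
    known = df[df["county"] != "Unknown"].copy()
    unknown = df[df["county"] == "Unknown"]
    for (state, week), grp in unknown.groupby(["state", "epi_week"]):
        mask = (known["state"] == state) & (known["epi_week"] == week)
        total = known.loc[mask, "cases"].sum()
        if total > 0:
            share = known.loc[mask, "cases"] / total
            known.loc[mask, "cases"] += share * grp["cases"].sum()
    return known
```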
Cumulative totals of standardized cases and fatalities (total cases or fatalities per 100,000 population) were calculated using the American Community Survey (ACS) 2019 5-year population estimates [62]. Daily case and fatality totals by county were aggregated per epi week to analyze temporal patterns.
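The standardization and weekly aggregation are straightforward; a sketch with assumed column names follows (note that CDC epidemiological weeks differ slightly from the ISO weeks used here for brevity):

```python
import pandas as pd

# Cases per 100,000 residents, using ACS 2019 5-year population estimates
df["cases_per_100k"] = df["cum_cases"] / df["acs_pop_2019"] * 100_000

# Aggregate daily new cases to (approximate) epi weeks per county
df["epi_week"] = pd.to_datetime(df["date"]).dt.isocalendar().week
weekly = df.groupby(["fips", "epi_week"], as_index=False)["new_cases"].sum()
```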
A few key geographic issues in the data were fixed to create a consistent geo-referenced dataset for this investigation. In New York City, the dataset reported all cases as one geographic entity, not five separately representing the counties (Boroughs) in the city. Case and fatality values for the five New York City boroughs came from the New York City Department of Health GitHub [72], and the proportions of cases/fatalities for each borough per epi week were calculated and applied to distribute The New York Times data into each borough. In Kansas City, Missouri, the dataset reported the city's cases and fatalities separately from the four counties that overlap the city (Cass, Clay, Jackson and Platte) and the county values did not include Kansas City cases or fatalities. A similar instance occurred in Joplin, Missouri, after 25 June 2020, when cases and fatalities reported for the city became separately counted and not included in Jasper and Newton county totals. An areal-weighted approach to attribute cases/fatalities to the surrounding counties of Kansas City and Joplin helped address this spatial data issue. While more sophisticated techniques exist to estimate populations in census boundaries that do not assume a homogenous distribution of the data, areal weighting is a relatively straightforward method of assigning data based on the area of the city intersecting county boundaries. Other studies using The New York Times dataset did not address these spatial anomalies in the data for those cities, indicating they were simply excluded from analyses [73][74][75].
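The areal-weighting step can be sketched with GeoPandas: intersect the city polygon with the county polygons and split the city's counts in proportion to intersection area. The column names (city_cases, county_fips) are hypothetical, and a projected CRS is assumed so that areas are meaningful:

```python
import geopandas as gpd

def areal_weight_cases(city: gpd.GeoDataFrame,
                       counties: gpd.GeoDataFrame) -> gpd.GeoDataFrame:
    """Assign a city's cases to overlapping counties by intersection area."""
    pieces = gpd.overlay(city, counties, how="intersection")
    pieces["weight"] = pieces.geometry.area / pieces.geometry.area.sum()
    pieces["assigned_cases"] = pieces["weight"] * pieces["city_cases"]
    return pieces[["county_fips", "weight", "assigned_cases"]]
```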
Social Vulnerability and Resilience
The Social Vulnerability Index (SVI) from the Centers for Disease Control and Prevention (CDC) [76] and the Social Vulnerability Index (SoVI ® ) from the Hazards and Vulnerability Research Institute at the University of South Carolina [63] are the two most often cited quantitative measures of social vulnerability [77]. Although named the same, there are key differences in their composition and formulation. SoVI ® utilizes more sociodemographic and socioeconomic variables as proxies for social vulnerability than SVI. SoVI ® applies an inductive method of grouping variables that are highly correlated into factors of vulnerability, while SVI uses a hierarchical approach [78]. SoVI ® includes important place and health-based indicators of vulnerability that are excluded from SVI, such as the number of hospitals per capita and the percent of the population without health insurance, as well as economic indicators of vulnerability (e.g., percent employment in the volatile and seasonal service sector industry). Our research employs SoVI ® as the measure of social vulnerability due to these key differences and because SoVI ® has proven more reliable in studies validating the indices using disaster outcome measures [77]. Key variables inputted into SoVI ® include (but are not limited to) age, wealth, race, ethnicity and education level. SoVI ® values are relative and range from a low of −9.01 in Loudoun County, Virginia (least vulnerable) to a high value of 15.52 in Kusilvak Census Area, Alaska (most vulnerable).
The Baseline Resilience Indicators for Communities (BRIC) applied here is a wellknown quantitative measure for community resilience measurement. BRIC uses 49 variables that are separated into six capitals of resilience (social, economic, institutional, housing/infrastructure, environmental and community). Standardized variables have values ranging from 0 to 1, which are then averaged for each capital. The sum of the capitals has a theoretical range of 0-6 for each county, with higher scores representing more resilience and lower scores less resilience [64].
County and State COVID-19 Mitigation Efforts
To measure the COVID-19 mitigation efforts at both the county and state level, we analyzed data from four sources (Table 1). A binary system was created with zero (0) indicating the absence of the mandate and one (1) indicating the presence of a mandate (i.e., mask, emergency declaration, stay-at-home policy/order, business closures) for each of the three county level datasets. To avoid double-counting, triangulation among the three datasets produced a single value (0 or 1) for each county for all four mitigation measures. Scores ranged from 0-4 for county-ordered mitigation. Another mitigation dataset had state-level actions only (Table 1) and included emergency declarations, shelter-in-place orders, business closures and mask mandates. In addition, using a binary system, if there was a statewide mandate for any of the mitigation measures, all counties in the state received a score of one for the presence of that mandate and zero if there was no mandate. As with the county-specific mandates, scores ranged from 0 to 4 for statewide measures. We assessed the impact of mitigation methods for county-specific and statewide separately and in combination (adding the two scores so the hypothetical range was 0-8). Eighteen counties elected to go against their state-level mask policy and opt-out of the mandate, so these counties received a value of 0 instead of 1 for mask mandate in the dataset.
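A sketch of the triangulation and scoring logic, assuming three county-level source tables (src1, src2, src3) with identical hypothetical columns and a state_score Series aligned by county FIPS:

```python
import pandas as pd

measures = ["mask", "emergency", "stay_home", "biz_close"]

# Present in any of the three sources -> 1; max() avoids double-counting
county_flags = (
    pd.concat([src1, src2, src3])
      .groupby("fips")[measures]
      .max()
)
county_flags["county_score"] = county_flags[measures].sum(axis=1)  # 0-4

# Add the statewide score (0-4) for the combined 0-8 index; counties that
# opted out of a state mask mandate would have mask set to 0 beforehand.
county_flags["combined_score"] = county_flags["county_score"] + state_score
```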
Pre-Emergent Economic Recovery
Even though COVID-19 continues to spread, an initial economic recovery is underway. Communities have begun the process of a return to normalcy by loosening restrictions on businesses as vaccinations increased and the federal government unveiled massive stimulus packages to encourage consumer spending and help those that have lost their jobs. To measure and identify where recovery was the strongest during our study period, the unemployment rate from the previous year (2019) was compared to the year of the pandemic (2020), then examined in monthly intervals. This enabled a relative comparison of places where unemployment rose or fell, creating a proxy level of recovery in those places at the end of January 2021. Unemployment data were derived from the US Bureau of Labor Statistics [69] for each month from January 2019 to January 2021 and for each county in the US, except for Kalawao County, Hawai'i. The change in the unemployment rate from a year before the pandemic (2019) to the pandemic year (2020) was computed in monthly intervals; then, a summary measure of percent change in unemployment for the entire study period for each county was calculated. To calculate this measure, the 2019 monthly unemployment rate was averaged for the year, then subtracted from the value for 2020 calculated the same way, in order to provide the unemployment rate percent change from 2019 to 2020. We recognize that recovery is ongoing but want to test the spatial patterns of economic recovery with the distribution of cases and fatalities of COVID-19.
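A compact sketch of this recovery proxy, assuming a hypothetical tidy table bls with columns fips, year and unemp_rate holding the monthly county rates:

```python
import pandas as pd

# Mean monthly unemployment rate per county and year, then the 2020 - 2019
# difference as the percent-change proxy for pre-emergent recovery.
u = bls.pivot_table(index="fips", columns="year",
                    values="unemp_rate", aggfunc="mean")
unemp_change = (u[2020] - u[2019]).rename("unemp_pct_change")
```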
Urban-Rural Delineations of US Counties
The spatial delineations of urban-rural areas in the US vary amongst a few federal government agencies, including the CDC's National Center for Health Statistics (NCHS) [70] and the US Department of Agriculture (USDA) [71]. One of the main US urban-rural federal classification schemes currently employed at county level is the NCHS 2013 Urban-Rural Classification Scheme, which assigns one of six urbanization levels to US counties, with 1-4 representing metro counties and 5-6 being non-metro [70]. Another scheme is the nine-level USDA 2013 Rural-Urban Continuum Codes (RUCCs), with 1-3 assigned to metro counties and 4-9 to non-metro counties [71]. Creating a binary non-metro/rural (0) and metro/urban (1) categorization for analysis using either the NCHS or USDA schema results in the same designations for each US county.
Analytical Approach
A range of statistical methods informed our analysis, including descriptive and inferential statistics, geospatial statistics and GIS analytics. Microsoft Excel 2016 was used to manage the tabular datasets prior to statistical and spatial analyses. The SPSS 27.0 software (IBM Corp., Armonk, NY, USA) was used to examine statistical associations via correlation and difference of means testing between standardized cases/fatalities, location (urban/rural), statewide mitigation, county mitigation, unemployment percent change, social vulnerability and community resilience variables. Correlation testing can help describe the relationships between variable pairs, the strength of their linear association and the statistical significance [79]. Difference of means tests (independent sample t-tests) assessed the statistical difference of the means between the same variable but for the two independent groups of urban and rural counties [80].
We employed Spatial Scan Statistic (SaTScan) version 9.7 (SaTScan, Boston, MA, USA) for space-time cluster analyses of COVID-19 cases and fatalities across the contiguous US with Poisson and space-time prospective/retrospective models [13,23,28,29]. The removal of Hawai'i and Alaska was necessary for the space-time analysis due to contiguity requirements and coordinates for cases/fatalities were assigned based on the centroids of counties. The methodology for identifying clusters follows the SaTScan method under Poisson assumptions [81]. The space-time scan statistic is measured by a cylindrical window (with a circular geographic base and a height that reflects the time period of potential clusters), which is moved in space and time with different sizes to cover the study region and return the significant clusters. The adopted space-time model compared the values of cases/fatalities to an expected value if the spatial and temporal locations were independent [81] and the discrete Poisson model assumed that cases/fatalities follow a Poisson distribution based on the underlying county population. Both prospective and retrospective analyses are performed using space-time discrete Poisson models. In the retrospective analysis, the study region is scanned for both active clusters (lasting until the end date) and historic clusters (having ceased to exist before the study period end date), while the prospective analysis only detects active clusters. SaTScan results were limited to significant cluster outputs throughout the cumulative study period in order to focus the investigation on the greatest overall risk during the first year of the pandemic. For each cluster, we reported the counties' Relative Risk (RR), which is the estimated risk in the cluster divided by the estimated risk outside the cluster [81]. Monte Carlo testing (999 simulations) provided the basis for statistical significance assessments.
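For reference, the likelihood ratio that SaTScan maximizes over candidate cylinders under the discrete Poisson model takes the standard Kulldorff form (notation ours): c is the observed count inside cylinder Z, μ(Z) its expected count under the null hypothesis and C the total count; the indicator restricts detection to high-rate clusters.

```latex
\Lambda(Z) \;=\; \left(\frac{c}{\mu(Z)}\right)^{c}
                \left(\frac{C-c}{C-\mu(Z)}\right)^{C-c}
                \mathbf{1}\{\, c > \mu(Z) \,\}
```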
For the remaining geospatial analyses, we utilized Geographically Weighted Regression (GWR) in ArcMap 10.8.1 (ESRI, Redlands, CA, USA) to determine the associations between independent and dependent variables, while accounting for spatial heterogeneity using geographical weights [82]. Bivariate Moran's I tests were also performed to identify associations between two geographic variables as a measure of spatial autocorrelation [83]. Spatial lag regression modeling then examined all independent variables to determine their explanatory influence on the dependent variable (i.e., standardized cases/fatalities), which uncovers whether statistically significant spatial interactions exist directly and the strength of that interaction for each variable [84]. The GeoDA 1.18 program (Center for Spatial Data Science, University of Chicago, Chicago, IL, USA) was used for running the local bivariate Moran's I tests, set for 999 permutations (p-value of 0.05 for global Moran's I) and for the spatial lag, with a first-order queen contiguity weight assigned. Island counties (e.g., Nantucket, MA) without neighbors were included in the analyses and assigned a weight of 0 in spatial lag regression, while SaTScan cluster analyses assumed contiguity of these counties and performed the analyses as if they were connected to the mainland state. Case/fatality rates for US counties and SaTScan cluster data visualizations were created using ArcMap 10.8.1 and ArcPro 2.7 (ESRI, Redlands, CA, USA).
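As a hedged, open-source stand-in for the GeoDA/ArcMap workflow (not the authors' scripts), the PySAL stack exposes the same pieces; the GeoDataFrame gdf and its column names here are hypothetical:

```python
import esda
import libpysal
from spreg import ML_Lag

# First-order queen contiguity weights, row-standardized
w = libpysal.weights.Queen.from_dataframe(gdf)
w.transform = "r"

# Bivariate Moran's I, e.g., social vulnerability vs. standardized fatalities
mbv = esda.Moran_BV(gdf["sovi"].values, gdf["fatal_per_100k"].values,
                    w, permutations=999)

# Spatial lag regression of fatalities on the contextual correlates
X = gdf[["sovi", "bric", "mitigation", "urban"]].values
model = ML_Lag(gdf[["fatal_per_100k"]].values, X, w)
print(mbv.I, mbv.p_sim, model.rho)
```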
Results
Over 25 million confirmed COVID-19 cases and roughly 437,000 fatalities occurred in the US during our year-long study period. The results, organized by our research questions, begin with the spatial and temporal patterns and clusters, followed by the correlates of cases and fatalities and end with the geographic patterns of pre-emergent recovery.
Spatial and Temporal Patterns
The national trend in total cases and fatalities for the study period showed an expected periodicity by epi week. The first peak in cases occurred in April 2020, with secondary and tertiary peaks in July and December 2020. The highest newly recorded cases (1.7 million) arose in the first epi week of January 2021. The highest peak in newly recorded fatalities (23,142) also occurred in January 2021, during epi week 2.
Geographically, cumulative standardized cases were highest (>9786/100,000) in the Southwest (centered on Arizona), the Great Plains/Mississippi River Valley (particularly the Dakotas) and the South (centered on Tennessee) (Figure 1). Lower case rates (<6878/100,000) occurred in northern New England, New York outside of metropolitan areas, West Virginia, and coastal Washington, Oregon and northern California. The geographic distribution of high standardized fatalities (>99/100,000) followed a similar pattern to cases, with less spatial density. The lowest fatalities (<51/100,000) were in northern New England, portions of Virginia and West Virginia and most western states.
Spatio-Temporal Clustering
Using weekly intervals, both retrospective and prospective analyses detected similar clusters. As the upper bound, a circular window scanning at most 20% of the total at-risk population provided more localized, county-scale clustering of cases/fatalities.
The SaTScan analysis showed three distinct and significant (p < 0.001) space-time case clusters from November 2020 to 31 January 2021 (Figure 2). The earliest cluster (Cluster 2), centered in the Midwest and mid-South region, started on 2 November 2020 and continued to the end of our study period. Higher relative risk levels were in Tennessee, Indiana and Illinois. Cluster 1, the largest areal cluster of cases, stretched across half of the country from the Great Plains to the western states. This cluster (9 November 2020-31 January 2021) contained most of the counties with the highest relative case levels (Figure 2). The third cluster was in the northeast and covered the period from 16 November 2020 to 31 January 2021.
Case and Fatality Correlates
Results for our second research question helped explain the relationships between standardized cases and fatalities and this study's correlates. As expected, standardized cases and fatalities were moderately correlated with one another (Table 2). However, there were significant differences between urban and rural counties in standardized cases, fatalities and their correlates (Tables 2 and 3). While the correlation of cases and fatalities with location (rural/urban) was statistically significant, the association was relatively weak (r_s = −0.122 for cases and r_s = −0.146 for fatalities, where a positive coefficient denotes urban and a negative one denotes rural).
The pre-existing social vulnerability of counties was positively correlated with standardized cases and moderately positively correlated with standardized fatalities. SoVI® was strongly correlated with rural counties (r_s = −0.554, p < 0.001). Difference of means (independent samples) tests confirmed significant differences in social vulnerability between rural and urban counties, with urban counties having lower levels of social vulnerability than rural ones. In contrast, there was no consequential association between existing levels of community resilience and standardized cases or fatalities: only a weak positive association with cases (r_s = 0.101, p < 0.001) and a non-significant negative correlation with fatalities (r_s = −0.013, p = 0.452). The number of total governmental mitigation measures was negatively and significantly correlated with standardized cases (r_s = −0.339, p < 0.001) and with standardized fatalities (r_s = −0.199, p < 0.001). While governmental mitigation was mildly but significantly correlated with urban counties (r_s = 0.263, p < 0.001), there was a significant statistical difference between urban and rural counties in the number of mitigation measures employed to reduce cases, with more measures undertaken in urban areas.
The GWR tests did not show any significant associations between the tested variables of cases and fatalities with location, total mitigation score, SoVI® and BRIC. A further test of the significance of the predictors of COVID-19 cases and fatalities employed spatial lag regression models, selected on the basis of Lagrange multiplier tests and residuals, to predict (1) standardized cases using location, total mitigation score, SoVI® and BRIC as independent variables and (2) standardized fatalities using the same predictors plus cases. The model prediction of standardized cases was moderate (R² = 0.434, p < 0.001), with higher resilience scores (β = 1420.21, p < 0.001) contributing most, followed by lower numbers of mitigation actions undertaken (β = −238.94, p < 0.001), higher levels of social vulnerability (β = 95.14, p < 0.001) and urban locations (β = 26.38, p < 0.001). The model for standardized fatalities also performed moderately well (R² = 0.404, p < 0.001) with four significant predictors: higher resilience (β = 34.94, p < 0.001), higher levels of social vulnerability (β = 7.95, p < 0.001), mitigation actions (β = 2.81, p < 0.05) and standardized cases (β = 0.01, p < 0.001).
Local Spatial Clusters of Cases and Fatalities
Bivariate mapping was used to identify clusters of high cases/fatalities (hot spots), clusters of low cases/fatalities (cold spots) and spatial outliers. The Global Moran's I test for the entire study area showed significant (p < 0.05) spatial association for all bivariate pairs except fatalities and social vulnerability. The association between location and cases and fatalities across the entire study area was spatially random. The local Moran's I test, however, showed considerable spatial clustering of cases and fatalities with social vulnerability and community resilience (all local clusters/outliers significant at p < 0.05).
The bivariate local Moran's I test indicated a significant cluster of high cases and high fatalities with high levels of social vulnerability in southern Texas, New Mexico, western Mississippi and South Dakota (Figure 3a,c). A cluster of high cases and high resilience appeared in the northern Plains and western Midwest counties (Iowa, Nebraska and the Dakotas) (Figure 3b). However, the most significant clustering of high fatalities and high levels of community resilience was much smaller and concentrated in upper Midwest counties (Figure 3d). Statistical significance was higher for the discordant pairs: high fatalities with low resilience, and low fatalities with high resilience.
Mitigation Clusters
Our second research question also addressed the relationship between mitigation actions and cases/fatalities, and how this relationship varied between rural- and urban-designated counties. There was a negative correlation between mitigation and cases/fatalities (Table 2), indicating that the more overall mitigation actions taken in a county, the lower its case and fatality rates. In examining the mitigation actions themselves, we found slight but significant differences between urban and rural counties, with urban counties having undertaken more actions than rural counties (urban: mean = 4.98, SD = 1.141; rural: mean = 4.31, SD = 1.176).
The spatial patterns of standardized cases and fatalities in relation to mitigation action levels were similar across US counties (Figure 4a,b). Counties with more mitigation actions and lower cases/fatalities were generally in the West and the East, whereas higher cases and fatalities with fewer mitigation actions were dominant in the Great Plains states. There were clusters of high cases and more mitigation actions in the Southwest, with smaller clusters in California, Montana and south Florida.
Pre-Emergent Recovery Spatiality and Drivers
Our third research question addressed whether there were geographic variations in unemployment change and how this related to COVID-19 cases and fatalities. Change in the unemployment rate was our proxy outcome measure to monitor pre-emergent recovery. A larger percentage increase in unemployment from 2019 to 2020 suggested a slower initial recovery.
Counties ranged in total unemployment percent change between −33.67% in Blaine County, Nebraska, and 586.31% in Maui County, Hawai'i. Temporally, the greatest range in percent change in the unemployment rate occurred in April-May 2020, followed by a precipitous drop in June; the range then plateaued until October and narrowed slightly thereafter (Figure 5) until January 2021. Even this seemingly stable level at the end of our study period exceeded the difference between the unemployment rates of January and February 2020 (i.e., before business closures). A median line for counties' unemployment percentage change showed a peak in April and a declining slope afterward, as the recovery process began. The change in unemployment from 2019 to 2020 was negatively correlated with standardized cases (r_s = −0.105, p < 0.001) and fatalities (r_s = −0.084, p < 0.001), but the strength of these associations was very weak (Table 1). As the percentage of unemployment increased from the previous year, the number of COVID-19 cases decreased slightly. The changes in unemployment were moderately associated with urban areas (r_s = 0.381, p < 0.001), and the difference of means test confirmed a statistically significant difference between urban and rural areas (Table 2). A snapshot of the status of pre-emergent recovery at the end of our study period (January 2021) shows an uneven pattern across the US, with greater increases in unemployment (less recovery) in urban counties (shown in dark hues) than in rural ones (shown in lighter hues) (Figure 6). Regionally, unemployment changes were more stable in parts of the South, Great Plains and the western US, outside of the major metropolitan areas and tourist destinations.
A spatial lag model with percent change in the unemployment rate as the dependent variable and cases, fatalities, location (dummy variable with rural = 0, urban = 1), total mitigation score, SoVI® and BRIC as independent variables generated an R² = 0.511 (p < 0.001). The most important predictors of pre-emergent recovery during the pandemic's first year were urban location (β = 8.34, p < 0.001), more mitigation methods (β = 3.68, p < 0.001) and lower levels of pre-existing social vulnerability (β = −1.59, p < 0.001).
The local association between cases and percent unemployment in 2020 was more significant in central US counties, where both low-low clusters and high-low outliers suggested a relatively lower unemployment change (Figure 7). However, parts of the West (e.g., California, Texas, Colorado, North Dakota and Montana), Michigan, Tennessee, South Carolina, Florida and the Northeast (New Jersey through Maine) contained high-high clusters with adjacent low-high county outliers, indicating higher rates of unemployment change.
Discussion
Investigating the spatial and temporal disparities in the first year of the COVID-19 pandemic revealed significant regional clustering patterns across US counties that varied between the annual and weekly scales. While disparate COVID-19 cumulative experiences were initially evident in the Great Plains, Southwestern and Southern regions, these patterns changed when examining the data by epi week. Space-time analyses found three distinct clusters of cases in the West/Southwest, Ohio-Mississippi Valley and Northeast from November 2020 to January 2021. The retrospective and prospective analyses identified identical clusters, owing to the significant increase in cases at the end of 2020. Four distinct space-time clusters for fatalities showed an early cluster in the New York metropolitan region, a second cluster in the South, a third in the Midwest and Great Plains and a fourth in the West. In comparison with studies using the same methodology but focused on earlier stages of the pandemic, the identified clustering patterns are slightly different in the central and western US but rather similar in the northeastern and southern US [23,28,29].
Relatively few studies to date have systematically examined nationwide urban-rural differences in COVID-19 cases and fatalities over the entire first year of the pandemic. The relationship between COVID-19 cases and fatalities and the correlates in this study exposes distinct urban-rural differences in US counties, with more standardized cases and fatalities in rural counties than in urban ones. There were higher numbers of mitigation measures in urban areas, which also had lower rates of standardized cases and fatalities. Our study confirms previous research over a smaller geographic area that found increased standardized cases and fatalities in rural US counties [13], but contradicts a nationwide study executed earlier in the pandemic that found higher mortality rates in urban counties [30].
The explanatory influence of pre-existing social vulnerability, community resilience and mitigation actions on COVID-19 cases/fatalities was clear in this study. In general, as the level of social vulnerability increased within a county, so did cases and fatalities. Social vulnerability was more strongly associated with fatalities than with cases, while community resilience had a less significant influence on either. Governmental mitigation actions had a significant association with lower cases and fatalities per 100,000 population, indicating that, as restrictions increased, cases and fatalities decreased. Spatial lag regression of cases and fatalities then revealed resilience levels to be the most influential indicator, followed by social vulnerability and mitigation actions. The spatial lag results associating higher fatality rates with higher resilience levels and increased mitigation actions appear to be a product of the Great Plains states' experience. This area holds some of the highest levels of community resilience in the country, but high resilience does not always equate to less risk, especially in a relatively rural region with a concentration of high-exposure workplaces and populations (e.g., meatpacking and immigrant labor). Additionally, the use of a single cumulative mitigation measure for the entire study period may be partially responsible for the relationship uncovered between high fatality rates and more mitigation actions. Furthermore, the relatively rural counties within this region, which were found to implement fewer mitigation actions than their urban counterparts, could be influencing the spatial lag predictors. The relationship between COVID-19 cases and fatalities and the correlates in this study exposes distinct place-based disparities among US communities not previously uncovered, particularly through the application of social vulnerability and community resilience indicators. This study also supports previous research finding that disadvantaged and socially vulnerable populations suffered larger, disproportionate burdens from exposure to COVID-19 [11,22].
Previously unobserved differential spatial patterns of unemployment change, as a proxy for pre-emergent recovery, were uncovered in this analysis, with higher observed changes in portions of the West, Midwest, Southeast and Northeast. The analyses show that the unemployment change rate had a stronger relationship with urbanity, adoption of mitigation actions and lower levels of social vulnerability, while its relationship with cases and fatalities was insignificant. Reduced public interaction due to job loss partially explains the relationship between decreased COVID-19 cases and increased unemployment. Many counties with the highest rates of unemployment change had tourism-based economies (e.g., Hawai'i) or were densely populated urban areas, where more people were employed in jobs lost to mitigation policies. The general median trend line for the unemployment change rate showed improvement (i.e., initiated recovery); however, significant disparities remained between counties, with a wider range of unemployment rate differences relative to the pre-event situation.
The results of this analysis provide empirical evidence for COVID-19 spatial and temporal disparities within the United States context, but they also possess international-level applicability. Countries across the globe can apply this research methodology to uncover local spatial and temporal clustering of COVID-19 cases and fatalities, test the explanatory relationship of correlate factors using the best available data and gain a better understanding of international-level unemployment recovery. Each country's COVID-19 experience was guided by its existing healthcare system, political decisions to mitigate and local population dynamics. An increased understanding of the social determinants of COVID-19 and of socially vulnerable populations' experiences during the pandemic outside the United States is important for equitable recovery. As this research shows, other nations must consider multiple social and environmental factors when attempting to explain the spatial and temporal diffusion of COVID-19 cases and fatalities.
Limitations
Several limitations exist in this study, primarily relating to the geospatial data. Inconsistencies in COVID-19 reporting and testing led to imperfect case and fatality data, due to issues such as testing shortages and unknown county identifiers of patients. COVID-19 data may also not exactly match official state or county totals, due to differences in reporting between government agencies and The New York Times data collection methods. The dataset applied here also contained partially missing data for three counties in Alaska and Hawai'i, and those states were removed completely from the spatial analyses; this does not, however, diminish the relevance of our results. Another important drawback of this investigation relates to the way SaTScan generates clusters in a circular shape, which may be limiting given the possibility of spatial heterogeneity within a study area. However, SaTScan's Poisson model, based on a circular scanning window, still provides valuable spatial clustering information that helps address our research questions. Our unit of analysis, the US county level, may also pose a limitation due to the modifiable areal unit problem (MAUP), which can lead to changes in analysis results based on imposed geographic boundaries. A similar limitation exists due to the uncertain geographic context problem (UGCoP), which can emerge from uncertainty in the spatio-temporal contextual influence of area-based attributes on individual decision-making behaviors or outcomes [86]. Analyses at the census tract or zip code level, for example, may reveal different localized spatial patterns. However, county-level data were the only spatial scale available for US COVID-19 cases and fatalities, and county-level spatial units are commonly used in geographic and public health analyses. A final notable limitation is our proxy indicator of pre-emergent recovery: since the event (i.e., the pandemic) is not over yet, the traditional definition of recovery does not match what is used here, hence our use of the term pre-emergent recovery. Furthermore, the unemployment percent change highlights only one aspect of recovery measurement, and a holistic view of recovery requires additional indicators and a wider timeframe extending into the post-pandemic era.
Future Research Directions
Future research directions based on the results of this study could first look deeper at the dynamics of COVID-19 spatial diffusion at a more refined urban-rural categorical scale, rather than applying a binary urban-versus-rural schema. Other pandemic indicators relating to mitigation, such as access to and receipt of vaccinations, could provide interesting explorations within and among counties of varying social vulnerability, community resilience levels and political leanings. As more time passes since the start of the pandemic, further data have been collected and made publicly available regarding the socio-demographic characteristics of COVID-19 patients, which could allow for an expansion of our empirical understanding of spatio-temporal impacts on socially vulnerable populations.
Conclusions
To the best of our knowledge, this is one of the first studies to assess the spatial disparities of COVID-19 cases and fatalities and to identify the influence of social vulnerability, community resiliency and government mitigation actions on standardized cases/fatalities across all US counties during the initial year of the pandemic. Additionally, few geographic studies have assessed the spatial variability of the unemployment change rate as a measure of pre-emergent recovery, a crucial element directly related to the economic impact of the pandemic. Case and fatality spatial clustering revealed different clustering patterns than previous spatio-temporal studies executed earlier in the pandemic. Another key takeaway is the confirmation of certain urban-rural patterns observed early in the pandemic, showing that those patterns of exposures and outcomes remained consistent through January 2021. More importantly, the results of this study are important for identifying place-based differences in COVID-19 exposure and outcomes based on community contextual factors, and for their practical application in targeting pandemic recovery at the local level. Finally, this unique longitudinal methodology applied publicly available and/or repeatable data that can guide future studies considering additional correlates of COVID-19 recovery, not only in the US but also internationally.
"Economics"
] |
Designing electronic properties of two-dimensional crystals through optimization of deformations
One of the enticing features common to most of the two-dimensional electronic systems that are currently at the forefront of materials science research is the ability to easily introduce a combination of planar deformations and bending in the system. Since the electronic properties are ultimately determined by the details of atomic orbital overlap, such mechanical manipulations translate into modified electronic properties. Here, we present a general-purpose optimization framework for tailoring physical properties of two-dimensional electronic systems by manipulating the state of local strain, allowing a one-step route from their design to experimental implementation. A definite example, chosen for its relevance in light of current experiments in graphene nanostructures, is the optimization of the experimental parameters that generate a prescribed spatial profile of pseudomagnetic fields in graphene. But the method is general enough to accommodate a multitude of possible experimental parameters and conditions whereby deformations can be imparted to the graphene lattice, and complies, by design, with graphene's elastic equilibrium and elastic compatibility constraints. As a result, it efficiently answers the inverse problem of determining the optimal values of a set of external or control parameters that result in a graphene deformation whose associated pseudomagnetic field profile best matches a prescribed target. The ability to address this inverse problem in an expedited way is one key step for practical implementations of the concept of two-dimensional systems with electronic properties strain-engineered to order. The general-purpose nature of this calculation strategy means that it can be easily applied to the optimization of other relevant physical quantities which directly depend on the local strain field, not just in graphene but in other two-dimensional electronic membranes.
Introduction
With their intrinsic two-dimensionality, "electronic membranes" are easily pulled or pinched by atomic-scale tips [1,2,3], can be made to conform to the substrate topography [4,5,6], can be inflated as balloons [7], can be stretched [8] or bent [9], crumpled on demand [10], and so on. Hence, two-dimensional crystals are an excellent case (and opportunity) of correlation between electronic behavior and shape, with tremendous implications in bridging soft and hard condensed matter. For example, if a physical property is sensitive to the state of deformation of the system, it can be used to monitor its shape, strain, etc.; conversely, the shape variables can be manipulated so that the physical quantity in question behaves in a desired way, has a certain magnitude, or a particularly useful spatial profile. In addition, the fact that some of these two-dimensional electronic membranes can be easily, and non-detrimentally, embedded in living tissues, organs or plants [11,12] brings the tantalizing prospect of using them in bioelectronics. The method to be discussed next can be a valuable tool there, in cases where the system's functionality is determined by the shape or deformation state of the membrane.
To be specific, but by no means implying a limitation in scope, consider the problem of strained graphene. It is well established that a mechanically strained graphene sheet is very resilient [1], even in polycrystalline form [13,14], and has altered electronic transport properties. In particular, and among other features, it exhibits an unconventional contribution to the electron-phonon coupling leading to the emergence of so-called pseudomagnetic fields (PMF) [15,16,17,18]. These fields appear naturally in the effective (low-energy) description of the electronic problem in deformed graphene, and are a consequence of the peculiar lattice structure. Briefly, the celebrated Weyl-Dirac equation that captures most of the electronic phenomenology of graphene (H = v_F p·σ for one of the K points in the Brillouin zone) is corrected in the presence of lattice deformations in a way that amounts to substituting p → p + eA, where A encodes all the details of the deformation and how it perturbs the electronic hopping amplitudes (defined below) [15]. As a result, even though A is not a magnetic vector potential, the actual dynamics has the same characteristics, and the Dirac electrons in graphene react to static and non-uniform lattice deformations as though they were under the influence of an effective magnetic field, with all the consequences that a magnetic field brings to electronic motion, except that time-reversal symmetry is not broken and, thus, A has the opposite sign in the effective Hamiltonian at the time-reversal-transformed K point. One such consequence is the modification of the electronic energy spectrum, with the development of local Landau levels for certain lattice deformations. This has been recently confirmed by local scanning tunneling spectroscopy (STS) measurements on nanometer-scale graphene nano-blisters, which revealed Landau level resonances associated with PMFs in the range 300-600 T [19,20]. Equally interesting, space-dependent Fermi velocities have also been reported in recent experiments on strained graphene [21], bringing this other theoretical prediction [22,23,24] and implication of non-uniform deformation fields closer to reality.
Figure 1. The two-dimensional solution of Guinea et al. [26]. (a) An initially circular and isotropic graphene sheet is deformed to a rounded triangular shape. (b) The magnitude and direction of local stretch are indicated by the ellipses, which are the images under the deformation of small circles in the undeformed sheet.
The possibilities associated with these discoveries and the confirmation of the drastic impact that moderate lattice deformations can have in graphene's electronic spectrum have spurred researchers to investigate deformation modes allowing a degree of control over PMFs that can be tailored for specific ends, such as electronic confinement, guiding, and so on. This is a concept known as strain engineering or straintronics [25,26,6,27,28].
Since the electronic dynamics can be straightforwardly determined once a space-dependent (pseudo)magnetic field B(X) is prescribed, and since much is already known about the behavior of Dirac electrons in graphene under the influence of magnetic field profiles such as barriers, wells, channels, and so on, it is natural to approach this strain-engineering problem from the perspective of seeking which deformation fields applied to the carbon lattice lead to the prescribed PMF profile. As will be clear in subsequent sections, the solution is not unique. If nothing else, this should be clear from the fact that there is a "gauge" freedom in selecting the vector potential A from B = ∇×A. The simplest such problem is to determine which displacement fields lead to a strictly uniform (space-independent) B. The first notable theoretical investigation along these lines was that of Guinea et al. [26], who restricted their analysis to deformations in the plane. In this regime the PMFs are linear in the displacement field, allowing one to calculate an in-plane deformation field giving rise to any given PMF. In particular, generating a constant or mostly constant PMF requires a characteristic deformation with 3-fold symmetry (see Figure 1), and the magnitude of the resulting PMF depends explicitly on the relative orientation of the deformation field and the underlying graphene lattice. This particular strain configuration has recently been explored in experiments on "artificial graphene" [29].
Figure 2. Experimental situation to be modeled, taken throughout this report as a practical example of our method to solve the inverse strain-engineering problem in graphene. Graphene is placed on a patterned substrate (characterized by a pattern width and depth) with which it interacts via Van der Waals forces. Hydrostatic pressure and the substrate profile are the two control parameters here, and the former is used to control the degree of conformation of graphene to the substrate.

Extending deformations to three dimensions introduces nonlinearity into the strain field, and such simple solutions are no longer available. Continuum mechanical theoretical investigations in simple geometries, such as one-dimensional bending [30,31] or radially-symmetric bubbles [32], and atomistic simulations of graphene sheets adhered to nanoscale patterned substrates [33,34,35] are examples of forward problems: calculating the PMF associated with a certain deformation. But such approaches are unlikely to solve the inverse problem of finding the deformation mode required to produce a given PMF. In addition, given the current surge of experimental interest in deliberately inducing non-uniform strains in graphene, various possible routes are being explored [6,20,36,37]. To be experimentally relevant, an attempt to effectively tackle the crucial inverse problem should be generic enough to encompass such diverse means of experimentally generating the desired strain fields. This report presents a general-purpose framework which may be used to solve such inverse problems in graphene. In particular, for a given target PMF and experimental configuration, the method aims to find the optimal deformation control that, when applied to the graphene sheet, produces the desired PMF. "Desired PMF" refers to any specified space dependence of B(X). "Deformation control" is the name for the geometric and mechanical parameters of the experiment that may be varied to change the deformation field. In the 2D example of Guinea et al. [26] the deformation control is the displacement field applied to the outer boundary. In the case of graphene adhered to a patterned substrate, the shape of the substrate performs that role. In this particular setup, which we will use extensively as an illustrative example in this report, the graphene sheet is assumed to have been transferred onto a patterned substrate and forced to conform to its shape by combined hydrostatic pressure and adhesion forces (see Figure 2). The aim in this case is to find the substrate pattern and pressure (the two deformation controls for this example) for which the deformed graphene sheet exhibits a desired target PMF. But we underline that the approach is straightforwardly applicable to any other target quantity with a known dependence on the strain field.
We begin by summarizing the elastic properties of graphene and the elastic plate equations that govern its deformation when considered as a continuum elastic membrane. We then discuss the coupling of local deformations to the electronic degrees of freedom by means of the PMF and the optimization framework that forms the basis of our solution method. This is followed by a summary of the numerical algorithm used to solve the problem and, finally, as an example calculation, we present the computed substrate shapes that generate various predefined PMFs in an overlaid graphene sheet, and discuss the versatility of our framework for application in different experimental and theoretical scenarios well beyond the example calculations shown here for illustration. For completeness, various technical considerations and details are included as appendices to the main text.
Graphene's elastic parametrization
The deformation of graphene is modeled using the equations of continuum elasticity. This formulation is chosen for its applicability across a wide range of length scales. In spite of recent developments based on discrete differential geometry to directly relate atomistic configurations with electronic properties of the type we envision [24], an atomistic approach to the elastic relaxation problem quickly becomes infeasible at scales of a few nanometers, due to the intrinsically more numerically demanding nature of inverse problems.
The deformation of a graphene sheet is thus described in terms of its deviation from a flat two-dimensional surface. The point X = (X, Y, 0) is transformed to r = (x_1, x_2, x_3) = (x, y, z) in three-dimensional space, where x_α = X_α + v_α(X_1, X_2) and z = w(X, Y). The deformation measures which describe the sheet's local stretching and bending are, respectively, the strain and curvature tensors ε_αβ and ρ_αβ. Since these are complicated to write in terms of the displacement components, in practice simplified forms are used (for completeness, a detailed discussion of the form of ε_αβ and ρ_αβ is included in Appendix A). The most common simplification is perhaps the von Kármán approximation, which uses the expressions

ε_αβ = (1/2)(∂_α v_β + ∂_β v_α + ∂_α w ∂_β w),   ρ_αβ = ∂_α ∂_β w.   (1)

The stress and moment resultants are assumed to be isotropic and linear in the strain and curvature tensors:

N_αβ = C[(1 − ν) ε_αβ + ν ε_γγ δ_αβ],   M_αβ = D[(1 − ν) ρ_αβ + ν ρ_γγ δ_αβ],   (2)

where we have used Einstein's summation convention, with ν the Poisson ratio and C, D the stretching and bending moduli, respectively. To calculate the stretching modulus C and the Poisson ratio ν, we use the results of Wei et al. [38]; the resulting value of ν agrees with the experimentally measured Poisson ratio in graphite [39]. The value used for the bending modulus was that of Kudin et al. [40], calculated ab initio as D = 1.46 eV = 2.34 × 10⁻¹⁹ N m. Note that the moduli C and D are independent in our formulation of the elastic response of graphene, which is treated as a purely two-dimensional sheet. This means that thickness is not a parameter in our modeling. We emphasize this aspect because graphene's elasticity is often modeled by treating it as a three-dimensional material which is thin in one dimension, i.e. a conventional elastic thin plate. In those cases the stiffness and bending moduli are often written in terms of the three-dimensional Young's modulus E and the thickness h:

C = Eh/(1 − ν²),   D = Eh³/[12(1 − ν²)],

where a typical value h ≈ 0.3 nm for graphene's effective thickness [41] is used. For example, such expressions have been used to cite graphene's Young's modulus as being of the order of 1 TPa [42,43,1]. While this may be useful to convey the scale and exceptional strength of graphene, the same numbers lead to an inaccurate value for the bending modulus D. Treating graphene as a continuous 3D elastic object might be a convenient approximation but, in keeping with graphene's two-dimensional nature, we retain the parameters C and D as our main quantities here rather than express them in terms of Young's modulus E.
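As a concrete illustration of Eqs. (1) and (2), the following sketch evaluates the von Kármán measures and the resultants from displacement fields sampled on a uniform grid. The finite-difference route and the representative stretching modulus C ≈ 340 N/m are our assumptions for illustration; the paper's own value of C, taken from Wei et al. [38], is not reproduced here.

```python
import numpy as np

def vk_measures(v1, v2, w, dx):
    """von Karman strain (eps) and curvature (rho) tensors, Eq. (1), from
    displacement fields sampled on a uniform grid (axis 0 = Y, axis 1 = X)."""
    v1_y, v1_x = np.gradient(v1, dx)
    v2_y, v2_x = np.gradient(v2, dx)
    w_y, w_x = np.gradient(w, dx)
    w_xy, w_xx = np.gradient(w_x, dx)
    w_yy, _ = np.gradient(w_y, dx)
    eps = {"xx": v1_x + 0.5 * w_x**2,
           "yy": v2_y + 0.5 * w_y**2,
           "xy": 0.5 * (v1_y + v2_x) + 0.5 * w_x * w_y}
    rho = {"xx": w_xx, "yy": w_yy, "xy": w_xy}
    return eps, rho

def resultants(eps, rho, C=340.0, D=2.34e-19, nu=0.165):
    """Isotropic linear resultants N, M of Eq. (2). C (N/m) and nu are
    representative literature values, not the paper's exact numbers."""
    tr_e, tr_r = eps["xx"] + eps["yy"], rho["xx"] + rho["yy"]
    N = {ab: C * ((1 - nu) * eps[ab] + (nu * tr_e if ab != "xy" else 0.0))
         for ab in ("xx", "yy", "xy")}
    M = {ab: D * ((1 - nu) * rho[ab] + (nu * tr_r if ab != "xy" else 0.0))
         for ab in ("xx", "yy", "xy")}
    return N, M
```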
Equilibrium conditions
In addition to the kinematic and constitutive equations for a sheet of graphene, one must establish the equations of force balance to close the system. These are typically found by minimizing the potential energy functional consisting of two terms: E_elast, the stored elastic energy, and E_ext, the potential energy associated with external forces applied to the sheet. The latter may be surface tractions or adhesive forces (for simplicity we neglect any forces explicitly applied to the edge of the graphene sheet). The two energy terms are given by

E_elast = (1/2) ∫_Ω (N_αβ ε_αβ + M_αβ ρ_αβ) dX,   E_ext = ∫_Ω V[w, v_1, v_2; λ_i] dX.

For now the energy density of external forces, V[w, v_1, v_2; λ_i], is left unspecified. However, we do note that it is in this term that the influence of the control variables λ_i is encoded; these may include, for instance, a parametrization of an underlying substrate, or the components of a surface traction field. The specific example corresponding to Figure 2 will be presented in detail later.
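Under the quadratic energy density written above, the total potential energy can be approximated on the same grid by simple quadrature, as in this sketch (function names are ours; V_density stands for the as-yet-unspecified external energy density V[w, v_1, v_2; λ_i] evaluated pointwise):

```python
import numpy as np

def total_energy(eps, rho, N, M, V_density, dx):
    """Grid quadrature of E = E_elast + E_ext, with
    E_elast = (1/2) * integral(N:eps + M:rho) and E_ext = integral(V).
    V_density is the external energy density evaluated at each grid point."""
    elast = 0.5 * (N["xx"] * eps["xx"] + N["yy"] * eps["yy"]
                   + 2.0 * N["xy"] * eps["xy"]
                   + M["xx"] * rho["xx"] + M["yy"] * rho["yy"]
                   + 2.0 * M["xy"] * rho["xy"])
    return float(np.sum(elast + V_density) * dx**2)
```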
In the standard variational formulation of the problem the potential energy is minimized by setting its first variation to zero, giving three weak form equations for v_1, v_2 and w. However, this requires some regularity in the behavior of the transverse displacement w: its first derivative must be continuous (C¹). Choosing C¹ elements in an arbitrary triangular discretization is not trivial, however. To overcome this difficulty, we use a mixed variational principle [44], based on the work of Herrmann and Miyoshi [see 45,46,47,48], which involves treating the moment tensor M_αβ as a separate variable. This allows us to treat the variables as continuous, and affine over each triangular element. For a graphene sheet with clamped conditions at the boundary, six weak form equations result, Eqs. (7a)-(7f) (see Appendix A.2 for their detailed derivation and explicit form). As will be clear shortly, these equilibrium equations provide the physical constraints of the optimization procedure. Our task is to seek a set of control parameters (substrate topography, boundary shape, etc.) that, upon solution of the variational problem to find the equilibrium configuration of the elastic medium, yields a PMF distribution that best approaches the prescribed target. The implementation of this optimization is done numerically. We have chosen to use piecewise affine finite elements combined with a patch recovery method in our calculations, for their simplicity and ease of implementation (see Appendix E). But it should be noted that the method allows higher-order elements to be used, as long as one ensures that those formulations are stable and solvable.
Coupling deformations to electrons
To the weak-form elastic equilibrium equations we must add an equation linking the strain field to the generated PMF, B(X). This is because we wish to find the deformation field that brings B(X) as close as possible to a desired target, say B̃(X). The origin of the PMF that appears in the low-energy effective Hamiltonian of deformed graphene is the local modification of the electronic hopping amplitudes, t, between neighboring carbon atoms brought about by the space-dependent deformation of the crystal lattice. The hopping is constant in the perfect crystal: t_0 = 2.7 eV. But, since t depends strongly on the inter-atomic distance, any local change caused by a deformation leads to perturbations of this equilibrium value and, hence, more generically, t(X_i, X_i + n) = t_0 + δt(X_i, X_i + n). The presence of δt, which is a relatively small perturbation to t_0 in practical situations, adds a correction to the low-energy Dirac-like Hamiltonian that emerges from a tight-binding description of the electronic hopping among p_z bands of adjacent carbon atoms. The effective Hamiltonian around the point K = (4π/(3√3 a), 0) in the first Brillouin zone has the form [15,16]

H = v_F (p + eA)·σ,   (8)

where σ is a vector of Pauli matrices, and v_F = 3t_0 a/2, with a = 1.42 Å the carbon-carbon distance in equilibrium. For deformations on scales that are large compared to a, the components of the pseudomagnetic vector potential A = A_x e_x + A_y e_y are explicitly given by [16]

eA_x = (ħc/2a)(ε_xx − ε_yy),   eA_y = −(ħc/a) ε_xy,   (9)

where c = −∂ log t(r)/∂ log r|_{r=a} (see Appendix B). For static deformations, a value c ≈ 3.37 captures the changes in various physical properties arising from strain-induced modifications of the π bands, in agreement with first-principles calculations [49,50,51,52,53,54]. We note, however, that the effective low-energy Hamiltonian (8) contains only the leading-order corrections arising from non-uniform deformations; further expanding in higher orders of smallness in the strain magnitude and the momentum with respect to K leads to terms that introduce, for example, Fermi surface anisotropy [55,54] and a space-dependent v_F [23,24,56,57]. For simplicity, since we want to tailor only the PMF distribution as an illustration of the method, and to keep the focus on the optimization framework rather than on the details of the different levels of approximation for the effective strain-dependent Hamiltonian, we shall focus the subsequent analysis on the Hamiltonian (8). But it should be clear that, as far as the optimization procedure is concerned (which does not take into account the energy of the electronic system), the particular form of H is only relevant in order to identify the target quantity that we wish to optimize and its expression in terms of the strain components, as in (9). If, instead of the PMF, we were interested in optimizing towards, for example, a desired space dependence of the Fermi velocity [23] or of the deformation potential [16], the method would require only the specification of its functional dependence on strain. Finally, the pseudomagnetic field itself, B, defined as the 2D curl of A, reads

B = ∂_x A_y − ∂_y A_x = −(ħc/2ea)[2 ∂_x ε_xy + ∂_y(ε_xx − ε_yy)].   (10)

As noted above, by virtue of our choice of piecewise affine finite elements for the numerical interpolation, the six variables v_1, v_2, w, M_11, M_12 and M_22 are treated as continuous, and affine over each triangular element. As a consequence, the strain field will be discontinuous and constant in each triangular element, leaving the PMF (10) undefined within this interpolation scheme.
To overcome this we use the technique of patch recovery [58], detailed in Appendix E. In brief, this is a mechanism that uses the discontinuous strain data ε_αβ to recover a strain field ε^rec_αβ of the same type as the primary variables: continuous, and affine over each element. The derivative of ε^rec_αβ is well-defined, and thus so is the PMF if it is calculated using this recovered strain field.
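Once a differentiable (e.g. patch-recovered) strain field is available on a grid, Eqs. (9) and (10) can be evaluated directly. The sketch below does this with finite differences; the constants follow the values quoted in the text, while the overall sign and gauge depend on valley conventions and should be checked against Appendix B.

```python
import numpy as np

HBAR = 1.054571817e-34      # J s
E_CHARGE = 1.602176634e-19  # C
A_CC = 1.42e-10             # equilibrium carbon-carbon distance, m
C_HOP = 3.37                # c = -d(log t)/d(log r) at r = a

def pmf_from_strain(eps, dx):
    """Pseudomagnetic field B = dAy/dx - dAx/dy, Eqs. (9)-(10), from a
    recovered (differentiable) strain field on a uniform grid
    (axis 0 = Y, axis 1 = X). Output is in Tesla for SI inputs."""
    pref = HBAR * C_HOP / (2.0 * E_CHARGE * A_CC)
    Ax = pref * (eps["xx"] - eps["yy"])
    Ay = -2.0 * pref * eps["xy"]
    Ay_y, Ay_x = np.gradient(Ay, dx)
    Ax_y, Ax_x = np.gradient(Ax, dx)
    return Ay_x - Ax_y
```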
Optimization
Solution of the weak form variational (equilibrium) equations constitutes the forward problem: in other words, given a set of control variables (here chosen to be the substrate shape, encoded in the external potential V), what deformation and PMF do these conditions impose on the graphene sheet? This report is aimed at answering the corresponding inverse problem: what are the control variables that will give rise to a desired PMF?
This inverse question is posed as an optimization problem, where an integral is minimized subject to the weak form equations written explicitly in Eqs. (7a)-(7f). If we let B̃(X, Y) be the desired PMF in Lagrangian coordinates, we then seek to minimize the functional

I = ∫_Ω (B[w, v_1, v_2] − B̃)² dX   (11)

to find a PMF, B, which is (ideally everywhere) as close as possible to the prescribed B̃(X, Y) (the reader will note once more at this stage that the quantity B[w, v_1, v_2], which is here associated with the PMF, can be replaced by any other of interest, as long as its dependence on the strain or deformation field can be specified; the scope of applicability of this method extends, therefore, well beyond the PMF example chosen here for definiteness). This sort of optimization problem, however, is typically mathematically ill-posed, in the sense that there are infinitely many solutions to such a minimization and, in order to find a solution which also satisfies the weak form equations, the numerical method often yields a solution which is not smooth. To counter this phenomenon, one must add to the minimization integral, I, a regularization term which penalizes high spatial variations in the control variables λ_i:

I_total = I + η I_reg[λ_i],   (12)

where η is a tunable parameter. The precise form of I_reg[λ_i] will depend on what the control variables λ_i represent; for the specific example of substrate shape optimization a typical form will be discussed below. Thus the full problem is to minimize, by varying the six state variables v_1, v_2, w, M_11, M_12, M_22 and the control variables λ_i, the objective function (12) subject to the six equations (7a)-(7f), solved for all admissible variations. In these expressions, B is given by (10), N_αβ is given by (2), and ε_αβ is given by (1). This problem is an example of a PDE-constrained optimization. For technical details regarding well-posedness and solution methods for such problems the reader is referred to Tröltzsch [59] or Borzì and Schulz [60]. This general procedure has also been applied to shape optimization in elastic plates experiencing differential growth fields [61]. Finally, in solving this problem numerically, all equations are adimensionalized in such a way that most variables are O(1) to ensure good numerical behavior (details described in Appendix C).
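In reduced form, and setting aside the all-at-once PDE-constrained machinery used in this report, the structure of the problem can be caricatured by nesting a forward elastic solve inside the objective (12) and handing the result to a generic optimizer. The sketch below assumes a hypothetical forward_solve(lam) returning the equilibrium PMF on a grid, and substitutes a simple Tikhonov penalty for I_reg; it illustrates the structure of the problem, not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def objective(lam, B_target, eta, forward_solve, dA):
    """Reduced-space caricature of Eq. (12): the forward elastic solve is
    nested inside the objective instead of imposing Eqs. (7a)-(7f) as
    explicit constraints. forward_solve(lam) is a hypothetical routine
    returning the equilibrium PMF on the grid for control parameters lam."""
    B = forward_solve(lam)
    misfit = np.sum((B - B_target) ** 2) * dA   # integral of (B - Btilde)^2
    reg = eta * np.sum(lam ** 2)                # Tikhonov stand-in for I_reg
    return misfit + reg

# Typical driver (lam0: initial Fourier coefficients of the substrate):
# res = minimize(objective, lam0, args=(B_target, 1e-9, forward_solve, dA),
#                method="L-BFGS-B")
```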
Practical application: optimizing substrate shapes
We wish to apply the previously developed general theory to a specific example, to illustrate its practical implementation and its utility in the problem at hand. The example we have in mind is of a graphene sheet forced to conform to a certain substrate shape by pressure and adhesive forces, as depicted in Figure 2. The elevation of the substrate is denoted z = ẑ(x, y) (we use the coordinates x, y as Eulerian coordinates, rather than the Lagrangian X, Y used in the definition of the graphene deformation). For definiteness, we assume that we are searching for target PMFs B̃ of typical scale B_0 = 10 T. We further assume that the domain Ω, representing the shape of the computational domain, has typical dimensions L = 100 Å.
The external forces
As discussed above, the external potential term V[v_1, v_2, w; λ_i] has two components: the work done by the hydrostatic pressure, pw, and the adhesion energy V_adh between the graphene flake and the substrate. To find the adhesion energy, consider the graphene sheet as a collection of atoms interacting with a field V_p(x, y, z) in three-dimensional space, such that a surface element dS of the sheet contributes an adhesion energy V_p dS at its deformed position. Given a substrate shape z = ẑ(x, y), we could determine the adhesion potential V_p at every point in three-dimensional space. But this will be time-intensive in general, and for optimization problems could prove prohibitively expensive. As an alternative, assume that the gradient of the substrate is small, so that we can approximate V_p(x, y, z) = J(z − ẑ(x, y)), where J(s) is some one-dimensional adhesion potential, such as a Lennard-Jones potential between surfaces [62] with well depth J_0 and minimum at the adhesion well position s*, i.e. the distance from the substrate at which a particle is in equilibrium. Thus

V[v_1, v_2, w; ẑ] = pw + J(w − ẑ(X + v_1, Y + v_2)),

with the control variables λ_i encoded in ẑ. As a representative value for s*, we use the value that Xu and Buehler [63] give for C-Cu, namely 2.243 Å. Similarly, we use J_0 = 0.45 J m⁻² as a representative value, from the investigation of Koenig et al. [7] into the adhesion strength between graphene and SiO_2. We select a typical value for the hydrostatic pressure of p = 100 bar = 10⁷ Pa.
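Numerically, the external energy density can then be evaluated pointwise, as in the sketch below. The 9-3 Lennard-Jones normalization (well depth J_0 at s = s*) is our assumption, since the paper's exact form of J(s) comes from reference [62].

```python
import numpy as np

P_HYDRO = 1.0e7      # hydrostatic pressure, Pa (100 bar)
J0 = 0.45            # adhesion strength, J/m^2 (Koenig et al. [7])
S_STAR = 2.243e-10   # adhesion well position, m (Xu and Buehler [63])

def lj_surface(s, j0=J0, s_star=S_STAR):
    """A 9-3 Lennard-Jones surface potential with well depth j0 at
    s = s_star; this normalization is our assumption, not necessarily
    the exact form of J(s) used in the paper."""
    r = s_star / s
    return j0 * (0.5 * r**9 - 1.5 * r**3)

def external_density(w, zhat_vals):
    """External energy density V = p*w + J(w - zhat), evaluated pointwise;
    the sign convention for the pressure term follows the text."""
    return P_HYDRO * w + lj_surface(w - zhat_vals)
```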
Parametrization of the substrate topography
In writing an expression for the substrate geometry, a naïve approach would be to use the same type of discretization as for the graphene sheet itself. The domain Ω is discretized into a collection of triangles, and each of the state variables (w, v_α, M_αβ) is posited to be continuous and affine over each triangle. This allows each variable to be described entirely in terms of its values at the nodes of the triangulation, and is the main advantage in writing the state equations in weak form§. However, this approach will not work when it comes to describing the shape of the substrate, z = ẑ(x, y). Since the graphene sheet can move in a lateral direction, the triangulations of the substrate and the sheet itself will not remain in registration. Therefore, for a given nodal point the distance measure w − ẑ(X + εv_1, Y + εv_2) will not vary smoothly as the v_α are varied. The alternative, which we follow here, is to construct a smooth shape for the substrate.
For the numerical experiments in this article, we assume that the substrate is patterned periodically in the two horizontal directions, and set the repeating 2D unit cell to be a rhombus. If we introduce the two coordinates ξ_1 = y√3 + x and ξ_2 = y√3 − x, the unit cell corresponds to a simple domain Ω in these coordinates. The topography of the substrate, ẑ(x, y), can then be resolved into a truncated Fourier expansion with the period of the unit cell Ω. Since the (finite) set of expansion coefficients determines the overall topography, they play the role of the control variables λ_i: varying the topography of the substrate is therefore achieved by varying these expansion constants (refer to Appendix D for the particulars of this approach). The periodicity of the substrate places limits on the patterns of PMF that can be sought. If we integrate the PMF over the unit rhombus, we find

∫_Ω B dX = 0,

because the strain fields generated by the periodic substrate will also be periodic. Thus (if we limit ourselves to periodic substrates) it is impossible to generate PMFs whose integral over the unit cell is nonzero; in particular, this rules out the generation of strictly constant nonzero PMFs by periodic deformations. It should be emphasized, however, that our general procedure is applicable to arbitrary domains, geometries and target PMFs, which require different parametric expansions of the substrate shape in place of the Fourier series employed here. The periodic choice is used by us simply out of convenience, precisely for its straightforward Fourier expansion that allows the description of an arbitrarily patterned substrate using mathematically simple trigonometric functions.

§ In this formulation the variations are, for each nodal point i = 1, . . . , N_p, the piecewise affine functions which take the value 1 at nodal point i and zero at every other nodal point. For a given weak form equation this provides N_p equations for the N_p unknown values of the function at the nodal points.
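A minimal synthesis of the substrate topography from its expansion coefficients might look as follows. The particular sine/cosine basis and the normalization of the oblique coordinates are our assumptions (the authors' exact basis is specified in Appendix D), but the structure is the same: a finite coefficient set acting as the control variables λ_i.

```python
import numpy as np

L_CELL = 1.0e-8   # unit-cell edge length, m
SQ3 = np.sqrt(3.0)

def substrate_height(x, y, coeffs):
    """Truncated Fourier synthesis of zhat over the periodic unit cell,
    written in the oblique coordinates xi1 = y*sqrt(3) + x and
    xi2 = y*sqrt(3) - x. coeffs maps harmonics (m, n) to (cos, sin)
    amplitudes and plays the role of the control variables lambda_i."""
    xi1 = (SQ3 * np.asarray(y) + np.asarray(x)) / L_CELL
    xi2 = (SQ3 * np.asarray(y) - np.asarray(x)) / L_CELL
    z = np.zeros_like(xi1, dtype=float)
    for (m, n), (a, b) in coeffs.items():
        phase = 2.0 * np.pi * (m * xi1 + n * xi2)
        z += a * np.cos(phase) + b * np.sin(phase)
    return z

# Example: a K = 2 truncation keeps harmonics with |m|, |n| <= 2.
```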
As discussed earlier, in order to avoid convergence towards solutions that are ill-behaved during the numerical optimization, a regularization term, I_reg, is added to the objective integral, as per equation (12) (in this case such ill-behaved solutions could be, for example, substrate profiles with discontinuities or sharp topographical features). We choose it to be

I_reg[ẑ] = ∫_Ω |∇ẑ|² dX,

which is simple to calculate using the orthogonality of the basis functions in the Fourier expansion over Ω. This expression measures the fineness of spatial variation in the substrate: I_reg is larger for substrate profiles with smaller wavelengths. The minimization of (12) thus leads to the penalizing of such rough profiles, which would be unrealistic in view of the finite feature resolution associated with any experimental approach to substrate patterning. Moreover, if a particular experimental implementation is to be carried out, the regularization term can be further refined or adapted to reflect the specific geometric, fabrication or other constraints.
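By Parseval's theorem, a gradient-type roughness penalty reduces to a weighted sum over the expansion coefficients, which is why the Fourier parametrization makes I_reg cheap to evaluate. A sketch follows, with the |k|²-weighting as our assumed choice (up to lattice geometry factors):

```python
def i_reg(coeffs):
    """Roughness penalty on the substrate. By Parseval's theorem a
    gradient-type regularizer, integral(|grad zhat|^2), reduces to a
    |k|^2-weighted sum of squared Fourier amplitudes, so shorter
    wavelengths are penalized more."""
    total = 0.0
    for (m, n), (a, b) in coeffs.items():
        total += float(m * m + n * n) * (a * a + b * b)
    return total
```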
Results and discussion
For illustration we initially seek PMFs with a typical strength of 10 T. The unit cell is chosen to have edges of length L = 10⁻⁸ m and we apply a pressure of 100 bar. These parameters might seem at the threshold of current experimental applicability, but they are chosen for their numerical tractability: if s* is too small, standard numerical algorithms will iterate over trial configurations with negative graphene-substrate separations s, a highly non-smooth problem. To overcome this issue, one would need to carefully design algorithms in which negative separations are avoided. But, for the proof-of-principle calculations reported here, we choose this acceptable compromise. We obtain good results for the numerical parameters K = 2, η = 10⁻⁹, and for an isometric mesh of 800 triangles in the unit cell. For illustration in this report we have chosen four target PMF patterns to optimize for, shown in the first column of Figure 3.

Figure 3. Results of the numerical optimization to make the PMF approach a prescribed spatial pattern with a typical magnitude of 10 T. Column 1: the target PMF patterns (color scale in Tesla, on the left). Column 2: calculated PMF (left-hand color scale). Column 3: substrate topography associated with the PMF shown on its left (color scale in units of L = 10 nm, on the right). The unit cell is displayed in each image, and has edges of length L.
Each of these target fields (shown in the first column of Figure 3) adheres to the condition that its integral over the unit cell must be zero. In the second column we see the PMF attained by the optimization code, and in the third column the substrate topography that produced that field. It is clear that the converged solutions reproduce with very good accuracy the spatial dependence of the target PMF in all four cases, including the rapid sign changes imposed by the target field which, while necessarily smooth in the solution, are still quite sharp, with the sign change occurring over a very short length scale. This demonstrates how the optimization strategy is able to capture all the features, global and detailed, of the target PMF. From an experimental point of view, the power of this method is clear: by providing an accurate solution to the inverse problem, it allows one to specify in detail which substrate pattern and topography lead to a PMF of given magnitude and space dependence. This, in principle, provides all the experimental information needed to fabricate the corresponding structures.
It is important to recall that, as pointed out earlier, the solution to the pseudomagnetic inverse problem might not be unique. This is because the objective function (12) is defined in terms of the field B, whose relation to the deformation field expressed in equation (10) allows for a large "gauge" freedom. As a result, more than one set of control parameters within a certain parameter range might be simultaneously compatible with the target field within a desired accuracy and obey the elastic plate equilibrium equations. On the other hand, having found a set of parameters that optimizes the induced PMF towards the target sought is no guarantee that this set will remain optimal upon finite changes of an external variable, or a constant scaling of the target function. This latter aspect is best illustrated with a specific case for our example system. The panels in the last column of Figure 3 show the suitable substrate profiles for a pressure of 100 bar and PMFs with an amplitude of 10 T. The equivalent calculations for a target PMF magnitude of 100 T, a ten-fold increase, yield the results shown in Figure 4. It is clear that the optimal substrate topographies that guarantee the same degree of proximity between the induced and target PMF as before are markedly different from those in Figure 3. This means that in some experimental setups, such as the one sketched in Figure 2, the proximity of the induced PMF to the target might need to be compromised in favor of having a fixed set of control parameters suitable over a range of PMF amplitudes (i.e., a single substrate profile able to generate acceptable PMF profiles of different amplitudes). However, the power of the method and its experimental practicality in allowing a direct, one-step route from PMF design to substrate fabrication should largely compensate for such compromises, when they are unavoidable.
There are many ways in which one can experimentally control the deformation of a graphene sheet [64], with each choice leading to a different set of control variables. Most obviously, the graphene sheet may be manipulated directly, whether by substrate topology (such as nanopillars) [33,6], a distribution of attached structures like nanotubes [27], or nano-manipulation of substrate adhesion properties [20]. The corresponding control variables would be the configurations of the nanostructures, including their shapes, their positions in relation to the graphene sheet, and their height. Edge actuation, where the control variables are the displacements applied to the edge, is another deformation mode [30,26]. In experiments, it may be preferable to apply these edge displacements indirectly, by applying electromechanical forces to the electrodes attached to a graphene flake [65]; the position and shape of the electrodes form the control variables in this case. (Three-dimensional plots of the optimized graphene topographies due to the substrate shapes in Figures 3 and 4, together with the resulting PMFs, are shown in Appendix F.)
A further class of deformations in graphene is the inflation of bubbles by suspending graphene over a suitably shaped cavity and applying hydrostatic pressure or electromechanical forces [9,2]. Since the forcing in these examples is global, it is the shape of the cavity that provides the variation in the calculated strain field, and so the control variables in an optimization calculation would be a parametrization of the cavity shape. Such inflation problems can be coupled with local deformation, in the form of a point deformation due to an STM tip [3,2], providing additional control variables of tip position and strength and allowing a greater ability to achieve desired strain fields and the consequent electronic properties.
Finally, it should be noted that, despite our focus in this report on optimizing the control parameters for a target PMF (which constitutes the core of the strain-engineering concept in graphene), this optimization framework can easily be extended to other target quantities by replacing the objective function in equation (12) with the relevant measure of "distance" for that problem, and specifying its dependence on the strain or displacement field analogously to the specification in equation (10). Each of the three main components of the procedure (namely the objective function that is minimized, the state equations that form the constraints, here the elastic plate equations, and the control variables that are open to experimental variation) can be changed to answer different questions of interest.
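To make the structure of the procedure concrete, here is a minimal sketch of the outer optimization loop in Python. The linear map G and the function induced_pmf are toy stand-ins for the elastic plate solve and the strain-to-PMF evaluation of equation (10); they are our illustrative assumptions, not the solver used in this work.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_ctrl, n_field = 8, 64
G = rng.standard_normal((n_field, n_ctrl))   # toy linear control -> PMF map
target_B = G @ rng.standard_normal(n_ctrl)   # a reachable target field

def induced_pmf(lam):
    # Stand-in for: solve the elastic plate equations for controls `lam`,
    # then evaluate the PMF from the resulting strain field (equation (10)).
    return G @ lam

def objective(lam, weight_reg=1e-6):
    # Objective in the spirit of equation (12): mean-square misfit between
    # induced and target PMFs, plus a regularization term on the controls.
    misfit = np.mean((induced_pmf(lam) - target_B) ** 2)
    return misfit + weight_reg * np.sum(lam ** 2)

res = minimize(objective, np.zeros(n_ctrl), method="L-BFGS-B")
print("converged:", res.success, " residual:", objective(res.x))
```

In the actual problem the forward map is nonlinear and each evaluation requires a constrained PDE solve, but the loop has the same shape: propose controls, solve the state equations, and score the induced PMF against the target.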
As simple examples, we suggest that one may wish to minimize or maximize the degree of rippling obtained in the edge actuation of a suspended graphene sheet [8], or that the resonant frequency of a graphene flake suspended over a cavity [66,67] may be optimized by varying the cavity shape according to the principles outlined in this exposition.
Summary
We presented a general-purpose framework suitable to answer the following inverse problem in graphene: which set of external control parameters (substrate topography, sample shape, load distribution, etc.) guarantees that the resulting equilibrium state of graphene exhibits a pseudomagnetic field that varies in space in a prescribed way? The ability to answer this question in general, given only a potential experimental setup and the target field profile, is paramount to fulfilling the vision of tailoring transport and other electronic properties in graphene by strain-engineering, and this concept calls for expedited ways to answer the question above. The method presented here relies on a PDE-constrained optimization strategy to minimize the generic objective function (12), which penalizes significant deviations between the induced and target PMFs. It thus affords a one-step route from PMF design to experimental implementation, is unbiased and general enough to accommodate the multitude of experimental parameters and conditions that can be envisaged to produce the desired deformations in the graphene lattice, and always ensures compliance with the constraints imposed by elasticity theory and the equilibrium conditions of graphene treated as a continuous elastic medium. We trust that it can be an important tool in designing or guiding experimentally realistic conditions for strain-engineered graphene devices and beyond; the versatility to define, in principle, a target function for any other physical quantity entails a broad applicability.
Appendix A. Graphene as an elastic continuum
Here we recapitulate the results of sections 2.1 and 2.2 with further discussion on their validity and applicability.
Appendix A.1. Definitions and assumptions
The deformation of a graphene sheet is described in terms of its deviation from a flat two-dimensional surface. A point X in the undeformed flat surface is defined by its coordinates (X_1, X_2, 0) = (X, Y, 0), for all (X, Y) belonging to some set Ω that defines the physical domain. Under a deformation the point X = (X, Y, 0) is transformed to

r = (x, y, z) = (X + v_1, Y + v_2, w), (A.1)

which expresses the new coordinates in terms of the in-plane displacements v_α and the vertical deflection w.¶ From these expressions one can define the base vectors r_,α = (x_,α, y_,α, z_,α) and the metric tensor g_αβ = r_,α · r_,β of the deformed surface. Then the true strain tensor is defined in terms of the difference of this metric tensor from its original value δ_αβ (Kronecker's delta, or the identity tensor, from our choice of Cartesian coordinates):

ε^true_αβ = (1/2)(g_αβ − δ_αβ). (A.2)

In terms of displacements, this becomes

ε^true_αβ = (1/2)(v_α,β + v_β,α + v_γ,α v_γ,β + w_,α w_,β). (A.3)

The second deformation measure of the surface is the curvature tensor, defined by ρ_αβ = r_,αβ · n, where n is the unit normal vector to the deformed surface. We do not derive the result here (for details in the case of a curved elastic shell see Koiter [68] or Niordson [69]), but the full expression for the curvature tensor in Cartesian coordinates is

ρ^true_αβ = r_,αβ · (r_,1 × r_,2)/√g, (A.4)

where g = det g_αβ. Expressions (A.3) and (A.4) are far too unwieldy for most purposes. Based on assumptions regarding the relative sizes of the displacement components and the length scale of deformations, the strain and curvature tensors are simplified. We chose the von Kármán approximation for its simplicity and its capacity to model moderate deflections. This simplification uses the expressions

ε_αβ = (1/2)(v_α,β + v_β,α + w_,α w_,β), (A.5)
ρ_αβ = w_,αβ. (A.6)

Denote the corrections to these approximations by ε^corr_αβ = ε^true_αβ − ε_αβ, and similarly for ρ^corr_αβ. As an a posteriori check on the validity of our solutions, we can verify that the approximations are close to the true values, i.e. that |ε^corr_αβ| ≪ |ε_αβ| and |ρ^corr_αβ| ≪ |ρ_αβ|. The stress and moment resultants are assumed to be isotropic and linear in the strain and curvature tensors:

N_αβ = C A_αβγδ ε_γδ, M_αβ = D B_αβγδ ρ_γδ,

with A_αβγδ = ν δ_αβ δ_γδ + (1/2)(1 − ν)(δ_αγ δ_βδ + δ_αδ δ_βγ) and B_αβγδ defined analogously with σ in place of ν, ν being the Poisson ratio and C, D the stretching and bending moduli, respectively. We have defined σ to be the analog of the Poisson ratio for bending deformations; if D_G is the Gaussian bending rigidity in the Helfrich free energy for the bending of a membrane [70], then σ = 1 + D_G/D. To calculate the stretching modulus C, we use the results of Wei et al. [38], who fitted a polynomial stress-strain relation to ab initio calculations up to strains of 50%. For simplicity, we will assume a linear stress-strain relationship, which is valid only up to strains of around 10%. The linear terms of Wei et al. [38] give, in our notation, the stretching modulus C and a Poisson ratio ν = 0.169. This value of ν agrees with the experimentally-measured Poisson ratio in graphite [39]. The value we chose for the bending modulus was that of Kudin et al. [40], calculated ab initio as D = 1.46 eV = 2.34 × 10⁻¹⁹ N m. We have found only two investigations into the value of D_G (and hence σ) in graphene; the calculations of Wei et al. [71] lead to σ = −0.056, whereas the numerical study of Koskinen and Kit [72] gives a value of σ = 0.565. In the absence of consensus, in our calculations σ is set equal to the Poisson ratio: σ = ν = 0.169, and thus B_αβγδ = A_αβγδ.

¶ Our convention is to have all Greek indices ∈ {1, 2}. The coordinate system of the undeformed sheet is chosen to be Cartesian, and we will make extensive use of Einstein's summation convention throughout this report. Subscripts following a comma denote partial differentiation with respect to that coordinate.
The constitutive equations for macroscopic materials are usually derived from full three-dimensional isotropic elasticity in the limit that the plate thickness is small. In the most rigorous treatments this analysis leads to limits on the validity of simplifying expressions such as (A.6) in terms of the relative sizes of stored elastic energy, applied surface tractions, and plate thickness [73,74,75]. Using such analyses, σ = ν and the stiffness and bending moduli may be written in terms of the three-dimensional Young's modulus E and the thickness h:

C = Eh/(1 − ν²), D = Eh³/[12(1 − ν²)].
Using a typical value h ≈ 0.3 nm for the graphene thickness [41], these relations have been used to cite graphene's Young's modulus as being of the order of 1 TPa [42,43,1]. While this may be useful to convey the scale and exceptional strength of graphene, the same numbers lead to an inaccurate value for the bending modulus D. Treating graphene as a continuous 3D elastic object is only a convenient approximation, so for definiteness we keep the two-dimensional parameters C and D as our main quantities here rather than express them in terms of the Young's modulus E. A rigorous justification of the plate equations used to model graphene deformations is beyond the scope of this paper, and would involve a detailed analysis of the stored energy together with the applied surface tractions.
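As a quick numerical check of this statement, the snippet below evaluates the thin-plate formulas with the round numbers quoted here (E ≈ 1 TPa, h ≈ 0.3 nm, ν = 0.169) and compares the resulting D with the ab initio value of Kudin et al.; it reproduces the order-of-magnitude disagreement.

```python
# Thin-plate formulas: C = E*h/(1 - nu^2), D = E*h^3 / (12*(1 - nu^2)).
E, h, nu = 1.0e12, 0.3e-9, 0.169             # Pa, m, dimensionless
C = E * h / (1 - nu**2)                      # stretching modulus, N/m
D_plate = E * h**3 / (12 * (1 - nu**2))      # bending modulus, N m
D_ab_initio = 2.34e-19                       # Kudin et al. [40], N m
print(f"C = {C:.0f} N/m, D(plate) = {D_plate:.2e} N m, "
      f"ratio to ab initio D: {D_plate / D_ab_initio:.0f}x")
```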
For the purposes of this paper it is enough to ensure that the strains and curvatures are within reasonable limits (|ε_αβ| < 0.1, |ρ_αβ| < h⁻¹) and that the corrections to the strains and curvatures are small.
Appendix A.2. Weak form equations
In section 2.2 we stated that equations (7a-7f) could be derived from a minimization of the energy integrals (6a-6b). In this section we justify this claim.
Recall that the stored energy in the plate was given by the integrals (6a-6b). To help understand the mixed variational principles that we rely on, let us consider a simplified problem of purely transverse deflections of an elastic plate subject to hydrostatic pressure:

E_t = ∫_Ω [ (1/2) D B_αβγδ w_,αβ w_,γδ + p w ] d²X. (A.14)

Assume that the boundary of this plate is formed of three disjoint regions: ∂Ω = Γ_c ∪ Γ_s ∪ Γ_f, with clamped conditions along Γ_c, simply supported conditions along Γ_s, and free conditions along Γ_f. The standard variational approach is to minimize E_t over the space of all twice-differentiable w satisfying w = 0 on Γ_c ∪ Γ_s and ∂_n w = 0 on Γ_c. The first variation of E_t is

δE_t = ∫_Ω ( D B_αβγδ w_,αβ w̃_,γδ + p w̃ ) d²X, (A.15)

where w̃ = δw is the variation in w. The weak solution is then the twice-differentiable function w(X, Y) that satisfies δE_t = 0 for each twice-differentiable variation w̃ satisfying w̃ = 0 on Γ_c ∪ Γ_s and ∂_n w̃ = 0 on Γ_c. To find the strong form equation and boundary conditions to which this weak formulation corresponds, assume that w is four-times differentiable and integrate (A.15) twice by parts, using the boundary conditions for w̃. Setting the result to zero for each admissible variation w̃, we find that the governing equation is

D B_αβγδ w_,αβγδ + p = 0 in Ω (equivalently, D∇⁴w + p = 0), (A.17)

with boundary conditions of vanishing normal bending moment on Γ_s ∪ Γ_f and vanishing effective transverse shear on Γ_f. This is the Euler-Lagrange equation associated with the minimization of (A.14).
Appendix A.2.1. Mixed variational principles

In a standard variational principle, the weak form equations are found by minimizing the energy functional. In a typical mixed variational principle, a dual variable is selected and a new functional is introduced. For the simple plate bending problem above the dual variable is usually selected to be the bending moment tensor M_αβ.+ The variational functional is a version of the Hellinger-Reissner principle [44,47], given by

H[w, M] = ∫_Ω [ −(1/2D) B⁻¹_αβγδ M_αβ M_γδ + M_αβ w_,αβ + p w ] d²X,

where B⁻¹_αβγδ is the inverse of B_αβγδ. The weak form equations are derived from this principle by finding the stationary value of H over all functions M_αβ and w satisfying the aforementioned conditions on Γ_c and Γ_s. Note that this stationary value of H will be neither a minimum nor a maximum; it is for this reason that these methods are often called saddle-point methods. The weak form equations are found by setting the variation δH to zero, where

δH = ∫_Ω [ −(1/D) B⁻¹_αβγδ M̃_αβ M_γδ + M̃_αβ w_,αβ + M_αβ w̃_,αβ + p w̃ ] d²X. (A.21)

Performing integration by parts one may once more recover the strong formulation (A.17) with the correct boundary conditions. However, the formulation (A.21) still requires a certain regularity of the deflection w̃; broadly speaking, the square of its second derivative must be integrable. Meanwhile, the only regularity required of M_αβ is that its square must be integrable. One of the main advantages of the mixed variational approach is that it allows regularity to be transferred from the displacement to the moment. On integrating (A.21) by parts, moving one derivative from w̃ onto M_αβ, we obtain a functional (A.24) that can be minimized over the space of all w and M_αβ whose first derivatives are square-integrable. However, such w are unable to account for zero normal derivatives on the boundary, so we encode that information directly in (A.24): since w = ∂_n w = 0 on Γ_c, the boundary integral in (A.24) vanishes over Γ_c, and hence only the contribution from Γ_s ∪ Γ_f remains, giving the functional (A.25). The weak form equations are obtained by finding the stationary value of (A.25) over the space of admissible functions satisfying w = 0 on Γ_s ∪ Γ_f; in other words, for all trial functions w̃ and M̃_αβ satisfying these conditions. Again, these weak form equations lead to the same strong form (A.17) together with appropriate boundary conditions. This elementary exposition has omitted technical details regarding the regularity of the solutions; for a more rigorous treatment the reader is referred to Arnold [44], Blum and Rannacher [47], and Oukit and Pierre [48].

+ Though Reinhart [46] and others have used the curvature tensor ρ_αβ in place of M_αβ, this merely results in a rearrangement of the governing equations, since one is a linear combination of the components of the other.
Appendix A.2.2. Application to nonlinear plate bending

The application of these mixed variational principles to nonlinear plates was first analyzed by Miyoshi [45] and Reinhart [46], who were interested in developing numerical methods to study the buckling of compressed plates. This meant that their boundary conditions were ones of applied force, which allowed them to use an Airy stress function approach, leading to coupled fourth-order differential equations. We are unable to use these equations directly, as our boundary conditions are ones of zero displacement, which is difficult to express in terms of the stress function. Instead, we simply add the in-plane stored elastic energy to the variational formulation (A.25), together with an arbitrary external potential. Writing N_αβ = C A_αβγδ ε_γδ for simplicity, the first variation of the resulting mixed variational functional H can be derived straightforwardly along the same lines discussed above. According to the discussion above, in order to derive the weak form equations we should find the stationary value of H over all admissible v_α, w, M_αβ satisfying v_α = w = 0 on Γ_c ∪ Γ_s. These equations then lead naturally to the weak form equations (7a-7f) on assuming that the entire boundary is clamped (Γ_s = Γ_f = ∅), that σ = ν, and on writing out the equations for the six components w̃, ṽ_α and M̃_αβ explicitly. The equations hold for all continuous integrable variations ṽ_α, w̃, M̃_αβ that satisfy ṽ_α = w̃ = 0 on the boundary.
Appendix B. Coupling deformations to electrons: PMFs
To the six weak-form elastic equations derived in Appendix A.2 we must add an equation linking the strain field to the generated PMF, B(X). This is because we wish to find the deformation field whose B(X) best approximates a desired target field B̃(X). The origin of the PMF that appears in the low-energy effective Hamiltonian of deformed graphene is the local modification of the electronic hopping amplitudes between neighboring carbon atoms brought about by the space-dependent deformation of the crystal lattice.
A single-orbital nearest-neighbor tight-binding model for the π bands, derived from electronic hopping among p_z orbitals of neighboring carbons, has been extremely successful in describing the behavior of electrons in graphene and their response to various kinds of external perturbations and fields [17]. The Hamiltonian that reflects this physics is given by

H = − Σ_{X_i, n} t(X_i, X_i + n) a†_{X_i} b_{X_i + n} + h.c. (B.1)

The bipartite nature of the honeycomb lattice is evident in this expression by the explicit distinction between the lattice sites belonging to sub-lattice A or B. The second-quantized operator a_{X_i} (b_{X_i}) destroys an electron in a p_z orbital that belongs to a carbon atom located on site A (B) of the unit cell placed at X_i. The parameter t(X_i, X_i + n) is the hopping amplitude between two neighboring π orbitals, and n runs over the three unit cells containing a B atom neighboring the A atom of the unit cell at X_i. The hopping amplitude is constant in the perfect crystal: t(X_i, X_i + n) = t_0 = 2.7 eV. But, since t depends strongly on the inter-atomic distance, any local change caused by a deformation perturbs this equilibrium value so that, more generically, t(X_i, X_i + n) = t_0 + δt(X_i, X_i + n). The presence of δt, which is a relatively small perturbation to t_0 in practical situations, adds a correction to the low-energy Dirac-like Hamiltonian that emerges from (B.1), so that the effective Hamiltonian around the point K = (4π/(3√3 a), 0) in the first Brillouin zone has the form [15,16]

H_K = v_F σ · (p + eA), (B.2)

where σ is a vector of Pauli matrices, and v_F = 3t_0 a/2, with a = 1.42 Å the carbon-carbon distance in equilibrium. For deformations on scales that are large compared to a, the curvature-induced tilting of neighboring p_z orbitals can be neglected.* In this situation the hopping amplitude t depends only on the distance between neighboring atoms, and we straightforwardly obtain the components of the vector potential A = A_x e_x + A_y e_y by expanding t to linear order in the deformation tensor. Choosing the coordinate system so that e_x is along the zig-zag direction of the honeycomb lattice, one obtains [16]

A_x = (ħc/2ea)(ε_xx − ε_yy), A_y = −(ħc/ea) ε_xy, (B.3)

where c = −∂ log t(r)/∂ log r |_{r=a}. For static deformations, a value c ≈ 3.37 captures the changes in various physical properties arising from strain-induced modifications of the π bands, in agreement with first-principles calculations [49,50,51,52,53,54]. Finally, the pseudomagnetic field B, being defined as the 2D curl of A, reads

B = ∂_x A_y − ∂_y A_x. (B.4)

* Note, however, that this is not a restriction on the applicability of the method. The assumption of small deviations from the planar configuration is for convenience and definiteness only. A full parametrization of the hopping modifications including curvature-induced re-hybridization would be dealt with in precisely the same way, because the only ingredient that is needed is the dependence of the PMF B on the strain components. The central and only requirement is the ability to explicitly specify this dependence, as done in equation (10) under the stated conditions.
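As an illustration of the strain-to-PMF map, the following sketch evaluates (B.3) and (B.4) on a grid with finite differences. The prefactor ħc/(2ea) and the synthetic strain pattern are for illustration only, and sign and valley (K versus K') conventions differ between references.

```python
import numpy as np

hbar, e = 1.054571817e-34, 1.602176634e-19   # SI units
a, c = 1.42e-10, 3.37                        # C-C distance (m), hopping exponent

def pmf(eps_xx, eps_yy, eps_xy, dx, dy):
    """B = dAy/dx - dAx/dy (2D curl of A), with A from equation (B.3)
    for e_x along the zig-zag direction. Returns B in tesla."""
    pref = hbar * c / (2 * e * a)
    Ax = pref * (eps_xx - eps_yy)
    Ay = -pref * 2 * eps_xy
    return np.gradient(Ay, dx, axis=1) - np.gradient(Ax, dy, axis=0)

# Toy example: 1% uniaxial strain modulated over a 1 um period.
x = y = np.linspace(0.0, 1e-6, 128)
X, Y = np.meshgrid(x, y)                     # X varies along axis 1
eps_xx = 0.01 * np.sin(2 * np.pi * Y / 1e-6)
B = pmf(eps_xx, np.zeros_like(X), np.zeros_like(X), x[1] - x[0], y[1] - y[0])
print(f"peak |B| ~ {np.max(np.abs(B)):.2f} T")   # fractions of a tesla
```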
As noted in the previous section, by virtue of our choice of piecewise affine finite elements for the numerical interpolation, the six variables v_1, v_2, w, M_11, M_12, and M_22 are treated as continuous and affine over each triangular element. As a consequence, from (A.5) the strain field will be discontinuous, and constant in each triangular element. As the strain components ε_αβ are discontinuous under this approximation, the PMF (B.4) is undefined in this scheme. To overcome this difficulty, we use the technique of patch recovery. For details of the technique, first described by Zienkiewicz and Zhu [58], we refer the reader to Appendix E. In brief, this is a mechanism that uses the discontinuous strain data ε_αβ to recover a strain field ε^rec_αβ of the same type as the primary variables: continuous and affine over each element. The derivative of ε^rec_αβ is well-defined, and thus so is the PMF if it is calculated using this recovered strain field, i.e. equation (B.4) evaluated with ε_αβ replaced by ε^rec_αβ in (B.3). (B.5)
Appendix C. Nondimensionalization
In solving the optimization problem of section 3 numerically, the first step is to nondimensionalize the system of equations in such a way that most variables are O(1), to ensure good numerical behavior. To accomplish this we choose a set of scalings, with an overbar denoting the dimensionless quantity and with a typical scaling chosen for the strain field. All the constants involved are previously defined, with the exception of L, representing the typical size of the domain Ω, and B_0, the typical magnitude of the target PMF B̃. Under these scalings the equations exhibit only one dimensionless parameter, namely the dimensionless bending stiffness κ. For completeness, the minimization problem can be summarized in its dimensionless form, together with the three dimensionless parameters derived from the potentials. As a representative value for s*, we use the value that Xu and Buehler [63] give for C-Cu, namely 2.243 Å. Similarly we use J_0 = 0.45 J m⁻² as a representative value, from the investigation of Koenig et al. [7] into the adhesion strength between graphene and SiO_2. We select a typical value for the hydrostatic pressure of p = 100 bar = 10⁷ Pa.
The derivatives of the potentials appearing in the dimensionless weak form equations are given in equation (C.23). In section 4.2 and subsequently, all variables are assumed to be dimensionless, and overbars are omitted for clarity.
Appendix D. Parametrization of the substrate topography
For the numerical experiments in this article, we assume that the substrate is patterned periodically in the two horizontal directions, and set the repeating unit cell to be the rhombus depicted in Figure D1. Introducing two coordinates ξ_1 and ξ_2 aligned with the sides of the rhombus, the unit cell corresponds to a square domain in these coordinates. The topography of the substrate, ẑ(x, y), can then be resolved as a sum of functions which are periodic on the unit cell Ω, taking the four forms

cos(2πkξ_1) cos(2πlξ_2), cos(2πkξ_1) sin(2πlξ_2), sin(2πkξ_1) cos(2πlξ_2), sin(2πkξ_1) sin(2πlξ_2),

and so we choose a truncated expansion in terms of these, with k, l = 0, …, K and the constants α_kl, β_kl, γ_kl, δ_kl playing the role of the control variables λ_i: varying the topography of the substrate is, therefore, achieved by varying these 4(K + 1)² constants. For convenience we set the redundant coefficients (those multiplying basis functions that vanish identically for k = 0 or l = 0) to zero for each k and, since rigid vertical displacements of the substrate do not affect the objective function, we further set α_00 = 0. In order to avoid convergence towards ill-behaved solutions (for example, substrate profiles with discontinuities or sharp topographical features) during the numerical optimization, a regularization term, I_reg, is added to the objective integral, as per equation (12). We choose it to be

I_reg = (1/Area(Ω)) ∫_Ω |∇ẑ|² d²X,

which is simple to calculate using the orthogonality of the basis functions f^n_kl over Ω. The domain Ω for the graphene sheet will also be the unit rhombus, with periodic boundary conditions applied to all six state variables. However, we set the displacement components v_1 = v_2 = 0 at the corner points to disallow arbitrarily large horizontal rigid displacements. This is a reasonable constraint on account of the two-dimensional periodicity of the substrate. Due to the geometry of the unit rhombus, we can set the triangulation to be a regular isometric grid.
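A minimal sketch of this parametrization follows. The explicit sine/cosine products are one plausible realization of the four periodic basis forms f^n_kl (the source does not spell them out), and the coefficient values are arbitrary.

```python
import numpy as np

def substrate_height(xi1, xi2, alpha, beta, gamma, delta):
    """Truncated periodic expansion of the substrate topography z(xi1, xi2)
    over the unit cell; alpha..delta are (K+1)x(K+1) coefficient arrays
    playing the role of the control variables lambda_i."""
    K = alpha.shape[0] - 1
    z = np.zeros_like(xi1)
    for k in range(K + 1):
        for l in range(K + 1):
            ck, cl = np.cos(2*np.pi*k*xi1), np.cos(2*np.pi*l*xi2)
            sk, sl = np.sin(2*np.pi*k*xi1), np.sin(2*np.pi*l*xi2)
            z += (alpha[k, l]*ck*cl + beta[k, l]*ck*sl
                  + gamma[k, l]*sk*cl + delta[k, l]*sk*sl)
    return z

K = 2
rng = np.random.default_rng(1)
coef = [rng.normal(0.0, 1e-9, (K + 1, K + 1)) for _ in range(4)]  # meters
coef[0][0, 0] = 0.0              # alpha_00 = 0: rigid lifts are irrelevant
xi = np.linspace(0.0, 1.0, 64)
X1, X2 = np.meshgrid(xi, xi)
z = substrate_height(X1, X2, *coef)
```

With a basis of this kind, the regularization integral I_reg reduces to a quadratic form in the coefficients, which is why it is cheap to evaluate during the optimization.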
Appendix E. Strain recovery
As noted in Appendix A, we choose a finite element discretization for our six variables v 1 , v 2 , w, M 11 , M 12 , M 22 that approximates these quantities with functions that are continuous across the domain Ω, and affine over each triangular element in the discretization (see Figure E1(a) for a representation of such a function). Thus the quantities can be parametrized by their values at each nodal point of the triangulation. Differentiating such a function leads to a discontinuous function, which is constant on each triangular element, as shown in Figure E1(b).
The piecewise constant function is a less accurate approximation than the continuous piecewise affine function, and this observation led to the patch recovery method [58], which reconstructs an accurate continuous piecewise affine representation of a quantity calculated as a piecewise constant function. The canonical example where this recovery method becomes relevant is in elasticity, where piecewise affine displacements lead to a piecewise constant stress field. The original purpose of the patch recovery method was to find a better approximation to the stress field calculated from a displacement-based finite element method. In this article a piecewise affine displacement field leads to a piecewise constant approximation ε_αβ to the strain field, whereas we require a differentiable approximation. By using the patch recovery method we recover a piecewise affine strain field, ε^rec_αβ, which we are able to differentiate to find the PMF B according to the prescription in equation (B.5).
To illustrate the patch recovery method, consider a triangulation of the domain Ω (which defines the spatial extent of the medium) with triangles k = 1, …, N_t and nodes i = 1, …, N_p. We have a function f, constant on each element (so f(X, Y) = f_k if X is in triangle k), from which we want to recover a piecewise affine function f^rec (defined by its values f^rec_i at each node X_i of the triangulation). The strength of the patch recovery method is that the nodal values f^rec_i are calculated individually in turn, rather than in a global optimization over all values at once. For each nodal point i, we identify the patch, which (for triangular elements with a piecewise affine target) is the set of all elements that contain the node i as a vertex, as displayed in Figure E2. The key step in the process is to fit a function f^fit_i(X, Y) to the patch for node i that is of the same order as the proposed target function. So, in this case, we need to fit an affine function f^fit_i(X, Y) = a + bX + cY to the patch. We use the values f_k, evaluated at the centroids (X^c_k, Y^c_k) of the elements, to calculate the parameters (a, b, c) = aᵀ through a least-squares optimization. As noted in Zienkiewicz and Zhu [58], a is thus found by solving the system

Σ_{k∈patch(i)} p_k p_kᵀ a = Σ_{k∈patch(i)} f_k p_k, (E.1)

where p_k = (1, X^c_k, Y^c_k)ᵀ. Having found f^fit_i(X, Y), the nodal value of the recovered function f^rec is simply f^rec_i = f^fit_i(X_i, Y_i), the fit function evaluated at the nodal point. At the domain boundaries there will usually be too few elements in the patch for the system (E.1) to be well-conditioned. In that case, following Zienkiewicz and Zhu [58], one finds the boundary nodal values of f^rec by using the interior patches and averaging over all the calculated values. This consideration does not apply for periodic boundary conditions, since in that case we can treat the entire domain as being of infinite extent, and all points are interior points.
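A minimal sketch of this nodal fit, with an illustrative four-element patch: the normal equations of (E.1) are assembled from the centroid values and solved for the affine coefficients.

```python
import numpy as np

def recover_nodal_value(centroids, f_values, node):
    """Patch recovery at one node: least-squares fit of a + b*X + c*Y to
    the piecewise-constant values f_k at the element centroids, i.e. solve
    sum_k p_k p_k^T a = sum_k f_k p_k with p_k = (1, Xc_k, Yc_k)^T."""
    P = np.column_stack([np.ones(len(centroids)),
                         centroids[:, 0], centroids[:, 1]])  # rows are p_k^T
    a = np.linalg.solve(P.T @ P, P.T @ f_values)             # (a, b, c)
    return a[0] + a[1] * node[0] + a[2] * node[1]            # fit at the node

centroids = np.array([[0.0, 1.0], [1.0, 0.0], [-1.0, 0.0], [0.0, -1.0]])
f_k = np.array([2.0, 1.0, -1.0, -2.0])   # piecewise-constant samples
print(recover_nodal_value(centroids, f_k, node=(0.0, 0.0)))  # recovered value
```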
Appendix F. Three-dimensional plots

Figure F1 shows three-dimensional visualizations of deformed graphene sheets corresponding to the four solutions of Figure 3, where the target PMF value was 10 T. Vertical scales are exaggerated for clarity.
The corresponding visualizations for B = 100 T (corresponding to Figure 4) are shown in Figure F2. In this case the vertical scale is not exaggerated. Figure F1. Three-dimensional plots of graphene sheets deformed by the four substrates in Figure 3, colored with the resultant pseudomagnetic fields. Vertical scales in some of the plots are exaggerated for clarity, by factors of 2, 3, 1 and 3 respectively. | 14,459.2 | 2014-09-16T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
A versatile electrical impedance calibration laboratory based on a digital impedance bridge
A four-terminal pair, fully digital impedance ratio bridge is described. Its principle of operation, including the balancing procedure and the calculation of the unknown impedance value, is presented. An initial 1:1 ratio check of the bridge has been performed. Systematic errors of the bridge design, and actions taken to deal with these errors, are shown.
Introduction
Calibration of artefact impedance standards such as standard capacitors, standard inductors and AC resistors (figure 1) has been carried out for decades at the primary level. These calibrations have mainly been performed at well-established National Metrology Institutes (NMIs) in the field of DC resistance and impedance metrology. In the field of electrical measurement there is an ever-growing need for precise and accurate capacitance and inductance measurements that are traceable to the SI. Only a limited number of countries world-wide have a primary impedance laboratory capable of realizing the capacitance and inductance scales traceable to the SI with measurement uncertainties at the 10⁻⁶ level or below. The reason is that realisation at this level demands numerous setups, none of which use exclusively off-the-shelf instruments. Design, construction and operation of primary-level setups exceed the budget of most laboratories and demand highly skilled manpower [1,2]. Most NMIs or Designated Institutes (DIs) in countries like Denmark have a large set of standard capacitors, standard inductors and standard resistors. These standards are annually shipped to a primary-level NMI for calibration, to obtain traceability to the SI and to be able to provide a calibration service for the NMI's or DI's customers. The author's laboratory has, like other laboratories, experienced unexpected jumps in the values of the standards, which after some investigation are believed to come from transportation effects such as temperature and humidity fluctuations or mechanical shocks. Due to this, and the fact that good-quality off-the-shelf digital sources and digitizers are widely available, a new generation of digital impedance bridges has seen the light of day. On a general level there are two types of digital impedance bridges: sourcing bridges (DAC-based) [3] and sampling bridges (ADC-based) [4]. Two European research projects have dealt with this subject. In 2013-2016 a Joint Research Project (JRP) in the framework of "The European Metrology Research Programme", titled "Automated impedance metrology extending the quantum toolbox for electricity" (AIM QuTE), had the goal of extending Josephson impedance bridge capabilities to the whole complex plane with the best level of uncertainties (0.05 ppm) and, in parallel, of developing fully digital bridges to reduce the operator workload imposed on national metrology institutes for the realisation of the impedance scales and for calibrations [5]. A currently running follow-up JRP in the framework of "The European Metrology Programme for Innovation and Research", titled "A Versatile electrical impedance calibration laboratory based on digital impedance bridges" (VersICaL), has the goal of expanding the application of digital impedance bridges among developing NMIs and DIs. The project consortium includes experienced NMIs that participated in AIM QuTE and who will assist the less experienced partners to create and operate an in-house realisation of the capacitance and inductance scales based on digital impedance bridges [6]. This paper presents the current state of Trescal Ltd. Denmark's sampling-based digital impedance bridge.
Setup
The Trescal four-terminal pair (4TP) sampling-based fully digital impedance ratio bridge consists of off-the-shelf equipment in addition to a home-built coaxial multiplexer (MUX). Figure 2 shows a schematic diagram and a photograph of the bridge when measuring the ratio of two 12.906 kΩ resistors. This bridge can compare any combination of impedances (R-R, C-C, L-L, R-C, R-L, and C-L) and can be constructed in minutes. It is essentially a plug-and-play solution, given the right software tools for instrument control and data analysis. To compare a 4TP reference AC resistor to a three-terminal (3T) inductor or capacitor, an adapter is needed; in this case adapter corrections have to be applied to the measurement results [7].
Principle of operation
A 4TP impedance is defined as the ratio of the voltage at the high-potential (HP) port to the current at the low-current (LC) port, Z = U_HP/I_LC [1], with the defining conditions that the current at the HP port and the voltage at the low-potential (LP) port are identically zero: I_HP ≡ 0 and U_LP ≡ 0. (2)
The software balances the bridge by keeping E1 fixed and adjusting E2 and E3 until the defining conditions stated above are satisfied for both impedances Z1 and Z2.
Balancing
Various types of balancing algorithms have been presented in the literature. After testing several algorithms, it was decided to use the secant method, which is one of the simplest [8]. Sources and detectors are matched in pairs. D1 is paired with E2 because adjustments of E2 have the most significant effect on D1. E3 is matched with a pseudo-detector D4 = D1 − D2. When D1 is zero and D4 is zero, it follows that D2 is zero. E2 and E3 are adjusted sequentially according to the secant method until the root sum of squares of the real and imaginary parts of D1 and D4 is less than 10⁻⁶ of the applied source voltage.
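A minimal sketch of such a complex-valued secant iteration, driving one detector reading to zero by adjusting one source, is shown below. The linear detector model is a toy stand-in for the real bridge network, where the pairs D1/E2 and D4/E3 would each be balanced in turn in this way.

```python
def secant_balance(detector, v0, v1, tol=1e-9, max_iter=50):
    """Adjust a complex source setting v until detector(v) ~ 0.
    The secant update works directly on complex phasors."""
    d0, d1 = detector(v0), detector(v1)
    for _ in range(max_iter):
        if abs(d1) < tol:
            return v1
        v0, v1, d0 = v1, v1 - d1 * (v1 - v0) / (d1 - d0), d1
        d1 = detector(v1)
    return v1

# Toy bridge: detector reading linear in the source setting, with an
# unknown complex gain and offset (placeholder for the real network).
gain, offset = 0.8 + 0.1j, 0.02 - 0.05j
reading = lambda v: gain * v + offset
v_bal = secant_balance(reading, 0.0 + 0.0j, 0.1 + 0.0j)
print(abs(reading(v_bal)))                  # ~0: detector nulled
```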
Final measurement
When the balancing procedure is complete, the complex impedance ratio is equal to the complex voltage ratio of the HP ports of the impedances:

Z1/Z2 = U1/U2. (5)

The two complex voltages are both measured by the same digitizer within a timespan of less than one second by changing the position of the MUX. Comparing equal impedances results in equal voltages U1 = U2 on the HP ports of the impedances, and any offset or gain error of D3 will in this case cancel out.
In reality, each voltage ratio comes from two measurements. First, a 'forward' measurement is performed with the reference positioned at Z1 and the device under test (DUT) at Z2. This is followed by a second, 'reverse' measurement with the reference positioned at Z2 and the DUT at Z1. This is done to cancel out any asymmetry in the measurement setup; the result is the average of the forward and reverse measurements. Knowing the resistance value at a given frequency and the time constant of a reference AC resistor, one can then calculate the AC resistance, capacitance or inductance of an unknown device [7].
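A few lines summarize the forward/reverse averaging; the ratio convention is our reading of equation (5) above, and the example numbers are those of the 12.906 kΩ comparison reported in the next section.

```python
def dut_value(z_ref, ratio_fwd, ratio_rev):
    """Average of the forward and reverse determinations of the DUT.
    ratio_fwd and ratio_rev are the measured DUT/reference voltage
    ratios in the two configurations."""
    return 0.5 * (z_ref * ratio_fwd + z_ref * ratio_rev)

r_ref = 12.90617e3                           # reference resistor, ohm
z2 = dut_value(r_ref, 12.906310e3 / r_ref, 12.906313e3 / r_ref)
print(f"{z2:.4f} ohm")                       # 12906.3115 ohm ~ 12.906312 kohm
```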
Initial check
An initial check of the new bridge has been performed by comparing two 12.906 kΩ AC reference resistors which had been calibrated at METAS in Switzerland. The certificate values are R1 = 12.90617 kΩ and R2 = 12.90631 kΩ, respectively. Figure 2(b) shows the setup of this measurement. R1 was set to be the reference of the comparison. The value of R2 was measured to be 12.906310 kΩ in the forward configuration and 12.906313 kΩ in the reverse configuration. The average value of R2 was 12.906312 kΩ, which was satisfactory for this test.
Systematic errors and actions
The setup is subject to two significant systematic errors. When comparing non-equal impedances, the linearity error of the main detector, D3, directly affects the impedance ratio. A method to determine the error of the digitizer and correct for it to a sufficient level of accuracy has been presented for high-frequency impedance bridges [9]. A series-parallel capacitive device has been constructed to measure the D3 linearity error, but it is yet to be tested; see figure 3. The second systematic error occurs at high impedances, e.g. 1000 pF, which has a reactance of 1.6 MΩ at 100 Hz. The input impedance of the main digitizer in differential configuration is 1 MΩ || 217 pF. In this case the 4TP defining condition given in equation (2) is no longer fulfilled, as current will flow through the digitizer rather than the impedance itself. At the time of writing, a buffer amplifier with an input impedance of 400 GΩ is being added to the output of the MUX. This will ensure that the defining condition in equation (2) is fulfilled at all times. The result of this modification is yet to be tested.
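The size of this loading error can be estimated in a couple of lines directly from the values quoted above:

```python
import numpy as np

w = 2 * np.pi * 100.0                        # angular frequency at 100 Hz
z_dig = 1 / (1/1e6 + 1j * w * 217e-12)       # digitizer input: 1 MOhm || 217 pF
z_dut = 1 / (1j * w * 1000e-12)              # 1000 pF DUT: |Z| ~ 1.6 MOhm
# Current drawn by the digitizer relative to the current in the DUT
# for the same port voltage (ideally ~0 for the defining condition):
print(abs(z_dut / z_dig))                    # ~1.6: condition badly violated
print(abs(z_dut) / 400e9)                    # ~4e-6 with the 400 GOhm buffer
```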
Conclusion
The new Trescal four-terminal pair (4TP) sampling-based fully digital impedance ratio bridge has been presented. Initial tests have shown that the proof of concept works well for a 1:1 comparison. Systematic errors have been identified, and ideas to correct for them have been presented, but these are yet to be tested.
"Engineering",
"Physics"
] |
Computational Model of Erratic Arrhythmias in a Cardiac Cell Network: The Role of Gap Junctions
Cardiac morbidity and mortality increase with population age. To investigate the underlying pathological mechanisms, and to suggest new ways to reduce clinical risks, computational approaches complementing experimental and clinical investigations are becoming more and more important. Here we explore the possible processes leading to the occasional onset and termination of the (usually) non-fatal arrhythmias widely observed in the heart. Using a computational model of a two-dimensional network of cardiac cells, we tested the hypothesis that an ischemia alters the properties of the gap junctions inside the ischemic area. In particular, in agreement with experimental findings, we assumed that an ischemic episode can alter the gap junctions of the affected cells by reducing their average conductance. We extended these changes to include random fluctuations with time, and modifications in the gap junction rectifying conductive properties of cells along the edges of the ischemic area. The results demonstrate how these alterations can qualitatively account for all the main types of non-fatal arrhythmia observed experimentally, and suggest how premature beats can be eliminated in three different ways: a) with a relatively small surgical procedure, b) with a pharmacological reduction of the rectifying conductive properties of the gap junctions, and c) by pharmacologically decreasing the gap junction conductance. In conclusion, our model strongly supports the hypothesis that non-fatal arrhythmias can develop from post-ischemic alteration of the electrical connectivity in a relatively small area of the cardiac cell network, and suggests experimentally testable predictions on their possible treatments.
Introduction
Understanding the basic cellular mechanisms underlying cardiac pathophysiology is of increasing importance, as the aging of the population predicts an increasing prevalence of cardiac morbidity and mortality. For this purpose, an important complement to experimental and clinical investigations is the mathematical modeling and simulation of the mechanisms responsible for cardiac electrophysiology [1], especially those that can underlie alterations in the propagation of electrical activity leading to arrhythmias. Although there are many types of arrhythmic cardiac behavior, most of the current models give emphasis to those with severe or fatal complications, such as atrial fibrillation [2] or ventricular fibrillation [3], and are based on the so-called reentry model. Reentry was first defined by Mines [4] as a persisting electrical impulse that reactivates an area of previously activated myocardial tissue that is no longer refractory, resulting in a circular movement of activation. The length of the circle depends on the impulse wavelength, defined as the product of the refractory period and conduction velocity (plus an excitable gap when present) [5]. The requirements for reentrant activation in the intact heart are a region of unidirectional block and a (regionally) slow-enough conduction velocity allowing an impulse to travel around or inside the affected region. The ultimate proof of reentry is its termination by interruption of the circle [4]. Our understanding of reentry has been extended by the introduction of different initiation mechanisms such as single rotor reentry [6], fibrillatory conduction [7][8], and the leading circle concept [9]. These mechanisms have been recently explored [3], cardiac tissue simulators have been presented [10], and the use of modeling in helping clinical practice has been suggested [11][12][13].
To explain the mechanisms underlying the initiation of a reentrant arrhythmic behavior, most models assume permanent changes in the intrinsic electrophysiological parameters of cardiac cells, such as altered intracellular calcium dynamics [14] or ion channel modifications [15]. Other models consider alternative mechanisms, such as mitochondrial membrane potential oscillations and waves [16], or the roles played by individual sarcolemmal ion channels in atrial and ventricular fibrillation [17]. The major problem with these approaches is that the arrhythmic behavior is reproduced in an all-or-none fashion. In these models, the arrhythmia (usually a tachyarrhythmia) is often systematic, triggered by a single stimulus and, once initiated, does not stop spontaneously. This contrasts with what is observed in clinical practice, where thousands of relatively brief non-fatal episodes of arrhythmia occur during the life of a subject. Isolated premature ventricular beats (iPVB), bi- or tri-geminy sequences, couplets, triplets, and ventricular tachyarrhythmias (see the MIT-BIH Arrhythmia database [18]) are commonly observed during the life of a subject without immediately life-threatening conditions, although an increased rate of premature ventricular beats has been associated with an increased risk of sudden death in patients with heart failure [19]. Since these conditions are widespread in the population, it is important to investigate the malfunctioning mechanisms and how they can be treated.
In this paper, we explore with a computational model the possibility that non-fatal arrhythmias can originate from post-ischemic dynamic alteration of the electrical connectivity in a relatively small area of the cardiac cell network. The simulation findings show that random fluctuations of the intercellular gap junction conductance inside an ischemic area are sufficient to generate practically all of the observed types of transient arrhythmia. The model suggests possible treatments to reduce or eliminate these conditions.

Methods

We modeled a relatively small two-dimensional cardiac tissue of 12.8 × 4.1 mm, composed of 128 × 256 cardiac cells, each implemented as a single compartment of 100 × 16 μm, corresponding to the real size of canine ventricular myocardial cells [21]. Electrophysiological passive and active properties were identical to those used in the Beeler-Reuter model [21], with model files downloaded from the public ModelDB database (http://senselab.med.yale.edu/modeldb, acc. n. 97863). This is one of the simplest realistic electrophysiology models for a single ventricular canine myocyte. It describes the cell activity on the basis of four trans-membrane currents: a sodium current, two potassium currents, and a calcium current, which is responsible for the plateau potential occurring during a cell's depolarization. A typical action potential generated in this cardiac cell model by a short (4 nA, 5 ms) current injection is shown in Fig. 1A. Each cell was connected via gap junctions to its four neighbors, as schematically represented in Fig. 1B. A gap junction between any two given cells was modeled as a bidirectional, time-independent, ohmic conductance of 30 nS under control conditions, in agreement with experimental data [22] and within the wide range measured experimentally in mammals, from the 500 nS measured in ventricular pairs to the 8 nS in SA nodal pairs [23][24].
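A minimal sketch of this connectivity in Python is given below; it computes the gap-junction coupling current into each cell of the grid. Array shapes and units are our assumptions, and the Beeler-Reuter membrane currents would be added to each cell's voltage update separately.

```python
import numpy as np

def gap_junction_current(V, g_right, g_down):
    """Coupling current into each cell from its four neighbors,
    I[i,j] = sum_n g_n * (V_n - V[i,j]), with one conductance per edge:
    g_right[i,j] couples (i,j) to (i,j+1); g_down[i,j] couples to (i+1,j).
    With V in mV and g in uS, I is in nA."""
    I = np.zeros_like(V)
    dV = V[:, 1:] - V[:, :-1]        # rightward neighbor differences
    I[:, :-1] += g_right * dV        # current into the left cell of each edge
    I[:, 1:] -= g_right * dV         # equal and opposite into the right cell
    dV = V[1:, :] - V[:-1, :]
    I[:-1, :] += g_down * dV
    I[1:, :] -= g_down * dV
    return I

ny, nx = 256, 128                    # 256 rows x 128 columns of cells
V = -85.0 * np.ones((ny, nx))        # resting potential, mV
g_right = np.full((ny, nx - 1), 0.030)   # 30 nS expressed in uS
g_down = np.full((ny - 1, nx), 0.030)
I_gap = gap_junction_current(V, g_right, g_down)   # all zeros at rest
```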
To model the abnormal conditions underlying different types of arrhythmias, we first implemented an ischemic area by altering the gap junction rectifying conductive properties (Fig. 1C) of the cells along the edges of the ischemic area (as illustrated in Fig. 1D). The gap junction conductance of the cells inside the area was reduced to a lower average value, which fluctuated randomly with time during a simulation (see Results). The complex behavior of gap junction conductance involved in cardiac function has been reviewed by Moreno [25]. Rectification of the gap junction conductance has been experimentally observed in HeLa cells [26][27], whereas fluctuations in the conductance during ischemia have been observed in dogs [28], rabbits [29] and humans [30]. This is the first time that the effect of time-dependent fluctuations has been investigated in the context of erratic cardiac arrhythmias.
The properties of an ischemic area were constrained by experimental observations. In particular, Peters et al. [31] found altered gap junctions as part of the early remodeling of myocardium after inducing infarction and ischemia in 6 dogs.
Another study [32] proposed the existence of an entry and an exit door somewhere along the border of an ischemic region. We thus implemented a generic ischemic area as a region with propagation properties slower than normal tissue [33] (represented in yellow in Fig. 1D). The affected area was delimited by an almost completely closed contour which blocks signal propagation (black cells in Fig. 1D). Entry and exit doors were implemented along the contour with two small sections (grey cells in Fig. 1D) having strong rectification properties which allow only mono-directional communication between the normal external tissue and the ischemic inner region. Two different ischemic areas, of the same width (3.9 mm, corresponding to 39 cells) but different height (0.5-0.75 mm, corresponding to 30-45 cells) have been simulated.
During a typical 130 s simulation, a pacemaker signal was generated by a periodic (every 800 ms) short current injection (4 nA, 5 ms) into cell (1,1) (indicated with a red marker in Fig. 1D). Gap junctions of cells belonging to the normal region were fixed at their control value of 30 nS, whereas the value of each gap junction inside the ischemic region was randomly redrawn, every 500 ± 250 ms, from a normal distribution with a given average and variance. Changing the gap junction conductance at an average interval shorter than the period of the normal pacing signal simplifies the analysis of the results; however, different intervals and mechanisms were also tested (see Results). Several combinations of gap conductance average value (range 4.5-5 nS) and variance (range 0.3-0.8 nS²) were tested. The membrane potential of cell (25,100) (indicated with a blue marker in Fig. 1D) under control conditions (i.e. no ischemic area) is shown in Fig. 1E. A movie illustrating the propagation of the activity following an external stimulation of cell (1,1) is reported in Movie S1.
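The fluctuation schedule can be sketched as follows. Two details are our assumptions for illustration: negative draws are clipped at zero, and the quoted variances are treated as standard deviations.

```python
import numpy as np

rng = np.random.default_rng(42)

def redraw_times(t_end_s, mean_s=0.5, half_width_s=0.25):
    """Times at which an ischemic gap junction redraws its conductance,
    with intervals uniform in 500 +/- 250 ms."""
    t, times = 0.0, []
    while t < t_end_s:
        t += rng.uniform(mean_s - half_width_s, mean_s + half_width_s)
        times.append(t)
    return np.array(times)

def redraw_conductance(shape, mean_nS=4.7, sd_nS=0.7):
    """New conductances for the ischemic area, drawn from a normal
    distribution and clipped at zero (the clipping is our assumption)."""
    return np.clip(rng.normal(mean_nS, sd_nS, shape), 0.0, None)

events = redraw_times(130.0)                    # one 130 s simulation
g_isch = redraw_conductance((45, 39))           # e.g. a 39 x 45-cell area
```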
It is important to stress that a number of additional mechanisms can be affected by an ischemic episode. Virtually all of them, from increased intracellular acidity [34] to changes in channel functioning [35], may independently contribute to the emergence of a PVB, and can lead to the generation of life-threatening arrhythmias. This is precisely why we did not include them in the model, at this stage. Rather, we were interested in studying the role, and isolating the effect, of random gap junction fluctuations. This is a process that is quite difficult to study experimentally. In this paper we have chosen to investigate only the functional consequences of an ischemic episode on gap junctions. It would be interesting to include the modulation of other mechanisms in a future study, to study how and to what extent they affect the basic findings shown in this paper.
Comparison with Experimental Findings on Non-fatal Arrhythmias
For a qualitative comparison of our model with experimental data, we selected several representative electrocardiographic (ECG) signals from the PhysioNet database [36], shown in Fig. 2. In particular, we considered several 10 s recordings from different patients with non-fatal arrhythmias commonly related to increasing deterioration of cardiac electrical activity. These include: single premature ventricular beats (PVB), trigeminy sequences (a sequence of normal and premature beats with a ratio of 2:1), bigeminy sequences (a sequence of normal and premature beats with a ratio of 1:1), couplets (two consecutive premature beats), triplets (three consecutive premature beats) and, finally, a short run of non-sustained Ventricular Tachycardia (VT, a sequence of more than 3 premature beats that spontaneously recovers to the normal condition).
Results
The main mechanism underlying the generation of premature beats suggested by our model is a direct consequence of the differential signal propagation inside an ischemic area. This process is illustrated in Fig. 3, where we show a few snapshots from Movie S2 and the membrane potential of cell(25,100) (indicated with a yellow mark in Fig. 3), outside the ischemic area. Under control conditions (i.e. without an ischemic area) each wave of activity generated by pacemaker cells will freely flow without interference (as shown in Movie S1). In the presence of a lesion (delimited by the yellow lines in Fig. 3), the signal propagation around the scar (see snapshots A-C and the corresponding time points in the bottom plot of Fig. 3) generates a secondary wave of activity inside the ischemic area (Fig. 3, snapshots D-E). Once it reaches the normal region (Fig. 3, snapshot F), it causes the generation of a premature beat. The activity spreads backward (Fig. 3, snapshots G-H) and negatively interferes with the generation of the expected beat in cell(1,1) (green arrow at t = 1800 ms in the bottom plot of Fig. 3), which is still within the refractory period. Activity returns to normal afterwards (Fig. 3, snapshot I). The arrhythmia does not occur every heart cycle because the gap junction random fluctuations do not allow reliable propagation of the signal inside the ischemic area (see Movie S2). These results show that an alteration in the gap junction conductance, caused for example by an ischemic episode in a relatively small area of cardiac tissue, can generate premature beats leading to arrhythmias.
To investigate this mechanism in greater detail, we carried out a systematic set of simulations using different values for the average and variance of the gap junction conductance inside the ischemic area. It should be stressed that, in all cases, the normal electrophysiological properties of all cells were not changed. Typical simulation results exhibiting different kinds of arrhythmia are shown in Fig. 4A, where we plot selected excerpts of single-cell recordings (cell (25,100)) from simulations using different values for the average and variance of the gap junction conductance in the ischemic area. The different types of arrhythmias were classified as shown in Table 1, by considering the sequence of interspike intervals having an abnormal duration with respect to that expected for normal cells. As can be seen, the model was able to qualitatively reproduce all types of experimentally observed arrhythmias (see Fig. 2). During each simulation we observed that, just as occurs in the real system, different types of arrhythmias can appear at different times, and that their relative proportion depends on the average and variance of the gap conductance. A typical example is shown in Fig. 4B, where we show the membrane potential of cell (25,100) during a 25 s time window of a simulation with a gap conductance of g = 4.7 ± 0.7 nS. More or less organized premature beats appear throughout the simulation (labels above the trace in Fig. 4B). Taken together, these results demonstrate that a single mechanism, namely the dynamical fluctuation of the gap junction conductance inside an ischemic area, is able to explain practically all kinds of non-fatal arrhythmias experimentally observed in cardiac cells.
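In the spirit of this classification, interspike intervals can be labeled with the coding of Table 1 (X normal, − shorter, + longer); the sketch below uses an illustrative 10% tolerance around the 800 ms pacing period, not the paper's exact criterion.

```python
def classify_isi(isi_s, period_s=0.8, tol=0.1):
    """Label each interspike interval: 'X' normal, '-' premature
    (shorter than normal), '+' longer than normal (e.g. a pause)."""
    labels = []
    for dt in isi_s:
        if dt < (1 - tol) * period_s:
            labels.append('-')
        elif dt > (1 + tol) * period_s:
            labels.append('+')
        else:
            labels.append('X')
    return ''.join(labels)

# e.g. a couplet: two premature beats followed by a compensatory pause
print(classify_isi([0.8, 0.45, 0.40, 1.2, 0.8]))   # -> 'X--+X'
```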
A more systematic exploration of the gap conductance parameter space is presented in Fig. 5A, where we report the proportion of PVBs as a function of the average and variance of the fluctuations inside the ischemic area. The range of values for the gap conductance reproducing the arrhythmias is drastically lower than the value for normal cells. To the best of our knowledge, there are no direct measurements of the gap conductance within a cardiac ischemic area, except for the obvious case of dead cells (scars), which can safely be assumed to have zero conductance. Since one of the conditions that may affect this value is an anisotropic distribution of gap junctions on individual cells [37], we tested different anisotropy ratios (calculated as the ratio between the longitudinal and transversal gap conductance in a given cell). For this purpose, we started from a configuration with an average total gap conductance in each cell of 4 × 4.5 = 18 nS. An isotropic distribution resulted in 5% of PVBs (Fig. 5A, left). As shown in Fig. 5A (right), a similar proportion of PVBs (red labels in Fig. 5A, right) was obtained for increasing values of anisotropy with a corresponding increase in the average gap conductance. Interestingly, we found that the relative distribution of premature beats corresponding to the different kinds of arrhythmias can be directly related to the variance of the gap conductance fluctuations. This is illustrated in the top plot of Fig. 5B, where we show that (for an average gap conductance of 4.7 nS) a progressively higher variance in the fluctuations results in a decrease of isolated premature beats (Fig. 5B, iPVB, white bars) and an increase of those involved in episodes of bigeminy, couplets, triplets, and tachycardia. The same effect, although less pronounced, was observed as a function of the average value (with a fixed variance, Fig. 5B). These results suggest that random fluctuations in the gap conductance inside an ischemic area can promote and modulate the development of specific types of arrhythmic behavior.
To investigate the possible effects of ischemic areas of different sizes and positions with respect to the propagation of the normal electrical activity, we carried out a set of simulations using a different cell (O2 instead of O1, Fig. 6A) for the pacemaker stimulation. Simulations were also carried out using a 30% smaller area (Fig. 6A, Is_S), involving 1200 instead of 1800 cells. In these cases, the average value for the gap conductance was fixed at 4.7 nS. The results are shown in Figure 6B, where the fraction of premature beats is plotted as a function of the conductance variance. They suggest that when an ischemic episode creates a sufficiently large area of altered gap junctions (Fig. 6B, Is_B plots), the occurrence of premature beats does not depend on the direction of propagation of the physiological electrical activity. In contrast, a relatively smaller area (Fig. 6A, Is_S) may be much more sensitive to the propagation direction (Fig. 6B, Is_S). This occurs because the signal propagation delay inside a larger damaged area is sufficiently long for the signal to reach the normal region after the end of the refractory period of the normal tissue, independently of the origin of the external stimulation; for a smaller area (such as Is_S) this occurs when the signal arrives from O1 but not from O2. The phenomenon is illustrated in Movie S3 and Movie S4, respectively. Furthermore, consistent with the experimental findings of Woie et al. [38] showing that larger myocardial infarction areas lead to slower ventricular tachyarrhythmias, in our simulations the average interval during tachycardia events increased from 362 ± 13 ms for the smaller Is_S case to 545 ± 21 ms for the larger Is_B.

[Figure 4 caption, displaced here in the source: ... Fig. 2; B) 25 sec simulation exhibiting different types of arrhythmic behavior. In all cases, red markers highlight abnormal sequences (see Table 1); traces represent the membrane potential of cell (25,100), from simulations with the following average gap junction conductance and variance: (4.7, 0.3) iPVB; (4.9, 0.3) trigeminy, bigeminy, and triplet; (4.7, 0.6) couplet; (4.7, 0.8) VT. doi:10.1371/journal.pone.0100288.g004]
To show the robustness of the results we carried out additional simulations. Starting from the typical configuration discussed in Fig. 3, we applied different variations, one at a time. We tested anisotropic gap junctions [37] concentrated at the ends (Movie S5) or at the sides (Movie S6) of the cells, a 10% fluctuation of gap junctions in the entire tissue (Movie S7), and shorter (250 ms, Movie S8) or longer (1000 ms, Movie S9) average intervals for the gap conductance fluctuations. One particularly intriguing possibility is for gap junction fluctuations to be an activity-dependent process. The implementation of a long- or short-term plasticity mechanism (analogous to what occurs in chemical synapses) was outside the scope of this work. However, as a proof of principle, we ran an additional simulation in which gap fluctuations (4.7 ± 0.5 nS) in each cell occurred on the rising phase of the action potential, corresponding to the heart contraction. We obtained a proportion of PVBs similar to that obtained with the time-dependent fluctuations (19% vs 25%). Taken together, these results demonstrate the robustness of our model under different conditions, with the emergence of a number of PVBs at random times in all cases.
Finally, assuming that ischemic areas can be modeled with regions of cells with malfunctioning gap junctions, we considered ways to reduce or eliminate premature beats and, thus, arrhythmias. Clinically, severe cases of ventricular arrhythmias are treated by the relatively minor surgical procedure of radiofrequency ablation [39], whereas quite extensive and invasive maze cuts are used for cases of atrial fibrillation [40].
Our model suggests that premature beats can be eliminated in three different ways: a) by closing the exit door from the ischemic region, as is usually done in the hospital with the radio-frequency ablation procedure, b) by pharmacologically opening the entry/exit doors to the ischemic area, and c) by pharmacologically decreasing the gap junction conductance. To implement the surgical procedure (case a), we created additional lesions by setting to 0 the gap junction conductance of the cells surrounding the exit door of the ischemia. This precluded the propagation of an ischemic beat outside the ischemic area and prevented the generation of premature beats (see Movie S10). Case b) was implemented by strongly reducing the gap rectification property of the cells forming the entry/exit doors, making them similar to those of normal cells. This effectively removed the entry/exit doors, and had the effect of stopping the generation and propagation of abnormal beats outside the ischemic area (Movie S11). The remaining propagation of abnormal electrical activity inside the ischemic area (and the corresponding heart contractile activity) might explain why the infarction zone may appear relatively small during myocardial ischemia/reperfusion injury in open-chest dogs [41]. To implement case c), we ran a simulation in which all gap junction conductances (including those of normal cells) were reduced by 10%. The results (Movie S12) show that the normal propagation of the signal was unaffected, but its propagation inside the ischemic area was severely hindered. A 10% change may seem too small to produce macroscopic effects, but it should be considered that the suggested pharmacological application would affect (by 10%) the entire population of gap junctions (i.e. normal and ischemic). This is sufficient (in our model) to affect the propagation of the signal within the ischemic region (which is already barely able to propagate it) without interfering with propagation in the normal tissue. Of course this change may not be enough in the presence of larger ischemic regions. Taken together, these results show that it could be possible to treat non-fatal arrhythmias using relatively minor surgical or pharmacological procedures.
Discussion
The main aim of this paper was to explore the role of dynamical gap junction fluctuations in the occasional onset and termination of the (usually) non-fatal arrhythmias widely observed in the heart. In patients with ischemic cardiomyopathy, these events arise from an abnormal generation and propagation of electrical activity caused by more or less important ischemic episodes [42]. Because signal propagation in the heart occurs via gap junctions mediating the interaction among neighboring cardiac cells, a deeper understanding of the functional consequences of gap junction malfunctioning can be an important step toward understanding cardiac arrhythmias. In general, the dynamic reconfiguration of gap junction conductance can have non-trivial consequences in defining normal and pathological activity of a network of connected cells in the heart and also in the central nervous system [43].
With our model, we have demonstrated how the degradation of gap junctions inside an ischemic area is able to explain practically all kinds of non-fatal arrhythmias experimentally observed in cardiac cells. The average value and variability of the gap junction conductance can be directly related to the type and seriousness of the arrhythmic behavior, whereas their fluctuation with time determines the length of the episode.
[Table 1. Specific sequences of interspike intervals (ISI) define normal or abnormal behavior: X, normal ISI; −, shorter than normal; +, longer than normal.]
There is considerable experimental evidence on the structure and modulation of gap junction channels [25], [44]. The docking of two connexons leads to the establishment of a homotypic or heterotypic gap junction channel, according to the different possible combinations of connexins that can form a channel [25], [27]. For example, it has been shown that various pathological disorders can be associated with alterations in the expression and modulation of connexin proteins [45][46][47]. In particular, in end-stage failing human hearts, Connexin 43 expression is decreased with respect to normal conditions at both the mRNA and protein levels, due to both ischemic and dilated cardiomyopathy [48], and Connexin 40 plays a role in atrial fibrillation [49][50]. Furthermore, arrhythmogenic remodelling of activation and repolarization in the failing human heart has been associated with changes in the expression of connexins [51]. Finally, the atrial myocardium susceptible to atrial fibrillation can be distinguished from its non-susceptible counterpart by a reduced Connexin 40 expression [47]. A critical property suggested by our model for the generation of premature beats is the anisotropy of conductance in the cells forming the entry and exit doors to the ischemic area. Experimental evidence for this rectifying effect has been reported in studies of the electrical properties of cells coupled by Connexin 40, Connexin 43 and Connexin 45 [25][26][27]. The post-ischemia formation of entry and exit doors, from an area with slower signal propagation properties, has been suggested by experimental findings [31][32]. The alteration of gap junction properties is also supported by a number of experimental findings and observations. For example, heterotypic gap-junction channels may exhibit rectification with respect to the junction potential [26], especially when the connexons include Connexin 45 together with Connexin 40 or Connexin 43 [25], [27]. Also, ischemia has been associated with a reduced amount of active Connexin 43 [45], [52][53] and Connexin 40 [47]. Furthermore, the novel drug Rotigaptide has been shown to increase gap junction conductance [54] by increasing Connexin 43 activity [55]. Our model suggests that this drug is effective because, by increasing the fraction of phosphorylated Connexin 43, it promotes the formation of homotypic gap junctions with symmetrical properties rather than heterotypic gap junctions. This mechanism can thus inhibit the formation of entry/exit doors.
Both gap junction alterations and the mechanism of re-entry have been previously explored with computational models to explain cardiac arrhythmic behavior [2][3], [14]. However, to account for the initiation of arrhythmias, all these models assume a number of modifications to different model mechanisms, and none of them takes into account the spontaneous termination of the arrhythmic behavior. Different models of human atrial fibrillation have been reviewed and compared [2] and, in agreement with our study, the results suggest that a reduced conductance of the gap junctions inside a damaged area may promote non-fatal arrhythmias. Another review [56] also focused on atrial fibrillation and the role of re-entry, originating from the presence of an obstacle and correlated with the presence of a fibrotic area within the myocardium. In this case, two types of re-entry mechanisms were considered [32]: an inner loop, in which the ischemic region acts as a delay line for the front wave, and an outer loop, where the re-entry originates around an obstacle. The outer loop [2], [56] has been shown to be a useful way to explain the initiation (but not the termination) of atrial and ventricular tachycardia or fibrillation. With our model, we have shown that all types of non-fatal arrhythmias and slow tachycardia can also initiate and terminate by assuming an inner loop as their origin.
The treatment of at least the most severe forms of arrhythmia is usually carried out with surgical procedures that can be quite invasive [39][40]. It is thus important to develop alternatives to reduce or eliminate the occurrence of more or less organized premature beats.
[Figure 5. (A) ... conductance inside the ischemic area; (B) distribution of abnormal events as a function of the variance (top graph) or the average value (bottom graph) of the gap junction conductance inside the ischemic area. White bars represent iPVB.]
Our model suggests a few experimentally testable predictions on the possible actions that, in principle, can be used: a) relatively minor surgery to close the exit door to the ischemic area (Movie S10), b) pharmacological actions to reduce or eliminate the rectification properties of the gap junctions (Movie S11), and c) a relatively small (10%) pharmacological reduction of the gap junction conductance (Movie S12). In all cases, the propagation of the activity inside the ischemic area would be hindered while propagation in the normal tissue remains essentially unaffected.
Supporting Information
Movie S1 Simulation under control conditions.
Movie S11 Simulation in the presence of an ischemic area (Is_B) but without anisotropy of the gap junction conductance of the cells forming the entry/exit doors.
(MP4)
Movie S12 Simulation in the presence of an ischemic area (Is_B), but with all gap junction conductances reduced by 10%. (MP4) | 6,442.8 | 2014-06-18T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Flexible fabrication of a patterned red phosphor layer on a YAG:Ce3+ phosphor-in-glass for high-power WLEDs
We proposed a flexible and mask-less approach to directly fabricate a patterned red phosphor layer on a Y3Al5O12:Ce3+ (YAG:Ce3+) phosphor-in-glass (PiG) for high-power white light-emitting diodes (WLEDs). This approach was realized by ultraviolet (UV) assisted and initiative cooling based water condensing. A low-temperature precursor glass matrix with a high refractive index was synthesized for the YAG:Ce3+ PiG. By controlling the UV pre-curing time, micro-concaves with adjustable sizes were fabricated on the red phosphor layer embedded with UV-curable polymer. With pre-curing times of 20 s, 30 s, and 40 s, the average aspect ratios of the micro-concaves are 1.03, 0.76, and 0.41, respectively. Consequently, the optimal patterned sample achieves the highest luminous efficacy (LE) of 108.5 lm/W at a current of 350 mA, which is 16.2% higher than that of the unpatterned sample. The corresponding correlated color temperature (CCT) and color rendering index (CRI) are 4831 K and 80.5, respectively. The results demonstrate that coating the YAG:Ce3+ PiG with a water-condensing-patterned red phosphor layer is a feasible and effective way to enhance the light extraction and color quality of high-power WLEDs. © 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
OCIS codes: (230.0230) Optical devices; (230.3670) Light-emitting diodes; (160.4670) Optical materials.
References and links
1. A. Zukauskas, M. S. Shur, and R. Gaska, Introduction to Solid-State Lighting (Wiley, 2002).
2. E. F. Schubert and J. K. Kim, "Solid-state light sources getting smart," Science 308(5726), 1274–1278 (2005).
3. X. Luo, R. Hu, S. Liu, and K. Wang, "Heat and fluid flow in high-power LED packaging and applications," Prog. Energ. Combust. 56, 1–32 (2016).
4. S. C. Allen and A. J. Steckl, "A nearly ideal phosphor-converted white light-emitting diode," Appl. Phys. Lett. 92(14), 143309 (2008).
5. Y. Ma, R. Hu, X. Yu, W. Shu, and X. Luo, "A modified bidirectional thermal resistance model for junction and phosphor temperature estimation in phosphor-converted light-emitting diodes," Int. J. Heat Mass Transfer 106, 1–6 (2017).
6. J. S. Lee, S. Unithrattil, S. Kim, I. J. Lee, H. Lee, and W. B. Im, "Robust moisture and thermally stable phosphor glass plate for highly unstable sulfide phosphors in high-power white light-emitting diodes," Opt. Lett. 38(17), 3298–3300 (2013).
7. L.-Y. Chen, J.-K. Chang, W.-C. Cheng, J.-C. Huang, Y.-C. Huang, and W.-H. Cheng, "Chromaticity tailorable glass-based phosphor-converted white light-emitting diodes with high color rendering index," Opt. Express 23(15), A1024–A1029 (2015).
8. M. Ma, F. W. Mont, X. Yan, J. Cho, E. F. Schubert, G. B. Kim, and C. Sone, "Effects of the refractive index of the encapsulant on the light-extraction efficiency of light-emitting diodes," Opt. Express 19(S5), A1135–A1140 (2011).
9. X. Xu, H. Li, Y. Zhuo, R. Li, P. Tian, D. Xiong, and M. Chen, "High refractive index coating of phosphor-in-glass for enhanced light extraction efficiency of white LEDs," J. Mater. Sci. 53(2), 1335–1345 (2018).
10. Y. K. Lee, J. S. Lee, J. Heo, W. B. Im, and W. J. Chung, "Phosphor in glasses with Pb-free silicate glass powders as robust color-converting materials for white LED applications," Opt. Lett. 37(15), 3276–3278 (2012).
11. R. Zhang, H. Lin, Y. Yu, D. Chen, J. Xu, and Y. Wang, "A new-generation color converter for high-power white LED: transparent Ce:YAG phosphor-in-glass," Laser Photonics Rev. 8(1), 158–164 (2014).
12. Y. Peng, S. Wang, R. Li, H. Li, H. Cheng, M. Chen, and S. Liu, "Luminous efficacy enhancement of ultraviolet-excited white light-emitting diodes through multilayered phosphor-in-glass," Appl. Opt. 55(18), 4933–4938 (2016).
13. Y. Peng, R. Li, H. Cheng, Z. Chen, H. Li, and M. Chen, "Facile preparation of patterned phosphor-in-glass with excellent luminous properties through screen-printing for high-power white light-emitting diodes," J. Alloys Compd. 693, 279–284 (2017).
14. X. Zhang, J. Yu, J. Wang, B. Lei, Y. Liu, Y. Cho, R.-J. Xie, H.-W. Zhang, Y. Li, Z. Tian, Y. Li, and Q. Su, "All-inorganic light convertor based on phosphor-in-glass engineering for next-generation modular high-brightness white LEDs/LDs," ACS Photonics 4(4), 986–995 (2017).
15. Y. H. Kim, N. S. M. Viswanath, S. Unithrattil, H. J. Kim, and W. B. Im, "Review—Phosphor plates for high-power LED applications: challenges and opportunities toward perfect lighting," ECS J. Solid State Sci. Technol. 7(1), R3134–R3147 (2018).
16. L. Yang, M. Chen, Z. Lv, S. Wang, X. Liu, and S. Liu, "Preparation of a YAG:Ce phosphor glass by screen-printing technology and its application in LED packaging," Opt. Lett. 38(13), 2240–2243 (2013).
17. Y. Peng, R. Li, S. Wang, Z. Chen, L. Nie, and M. Chen, "Luminous properties and thermal reliability of screen-printed phosphor-in-glass-based white light-emitting diodes," IEEE T. Electron Dev. 64(3), 1114–1119 (2017).
18. Y. K. Lee, Y. H. Kim, J. Heo, W. B. Im, and W. J. Chung, "Control of chromaticity by phosphor in glasses with low temperature sintered silicate glasses for LED applications," Opt. Lett. 39(14), 4084–4087 (2014).
19. L.-Y. Chen, W.-C. Cheng, C.-C. Tsai, J.-K. Chang, Y.-C. Huang, J.-C. Huang, and W.-H. Cheng, "Novel broadband glass phosphors for high CRI WLEDs," Opt. Express 22(S3), A671–A678 (2014).
20. Y. Peng, R. Li, X. Guo, H. Zheng, and M. Chen, "Optical performance improvement of phosphor-in-glass based white light-emitting diodes through optimized packaging structure," Appl. Opt. 55(29), 8189–8195 (2016).
21. Z. Lin, H. Lin, J. Xu, F. Huang, H. Chen, B. Wang, and Y. Wang, "Highly thermal-stable warm w-LED based on Ce:YAG PiG stacked with a red phosphor layer," J. Alloys Compd. 649, 661–665 (2015).
22. Y. H. Kim, P. Arunkumar, B. Y. Kim, S. Unithrattil, E. Kim, S.-H. Moon, J. Y. Hyun, K. H. Kim, D. Lee, J.-S. Lee, and W. B. Im, "A zero-thermal-quenching phosphor," Nat. Mater. 16(5), 543–550 (2017).
23. J. S. Kim, S. K. Eswaran, O. H. Kwon, S. J. Han, J. H. Lee, and Y. S. Cho, "White-light-emitting diodes: enhanced luminescence characteristics of remote yellow silicate phosphors printed on nanoscale surface-roughened glass substrates for white light-emitting diodes," Adv. Opt. Mater. 4(7), 976 (2016).
24. Y. Peng, X. Guo, R. Li, H. Cheng, and M. Chen, "Thermally stable WLEDs with excellent luminous properties by screen-printing a patterned phosphor glass layer on a microstructured glass plate," Appl. Opt. 56(12), 3270–3276 (2017).
25. Y.-K. Ee, P. Kumnorkaew, R. A. Arif, H. Tong, J. F. Gilchrist, and N. Tansu, "Light extraction efficiency enhancement of InGaN quantum wells light-emitting diodes with polydimethylsiloxane concave microstructures," Opt. Express 17(16), 13747–13757 (2009).
26. S. Wang, Y. Li, L. Feng, L. Zhang, Y. Zhang, X. Su, W. Ding, and F. Yun, "Laser patterning of Y3Al5O12:Ce ceramic phosphor platelets for enhanced forward light extraction and angular color uniformity of white LEDs," Opt. Express 24(15), 17522–17531 (2016).
27. R. Liang, F. Wu, S. Wang, Q. Chen, J. Dai, and C. Chen, "Enhanced optical and thermal performance of eutectic flip-chip ultraviolet light-emitting diodes via AlN-doped-silicone encapsulant," IEEE T. Electron Dev. 64(2), 467–471 (2017).
28. H.-T. Hsieh and G.-D. J. Su, "A novel boundary-confined method for high numerical aperture microlens array fabrication," J. Micromech. Microeng. 20(3), 035023 (2010).
29. H. Jung and K.-H. Jeong, "Monolithic polymer microlens arrays with high numerical aperture and high packing density," ACS Appl. Mater. Interfaces 7(4), 2160–2165 (2015).
30. J. Yong, F. Chen, Q. Yang, G. Du, H. Bian, D. Zhang, J. Si, F. Yun, and X. Hou, "Rapid fabrication of large-area concave microlens arrays on PDMS by a femtosecond laser," ACS Appl. Mater. Interfaces 5(19), 9382–9385 (2013).
31. B. Bao, J. Jiang, F. Li, P. Zhang, S. Chen, Q. Yang, S. Wang, B. Su, L. Jiang, and Y. Song, "Fabrication of patterned concave microstructures by inkjet imprinting," Adv. Funct. Mater. 25(22), 3286–3294 (2015).
32. M. Kuang, L. Wang, and Y. Song, "Controllable printing droplets for high-resolution patterns," Adv. Mater. 26(40), 6950–6958 (2014).
33. Y. Peng, X. Guo, R. Liang, Y. Mou, H. Cheng, M. Chen, and S. Liu, "Fabrication of microlens arrays with controlled curvature by micromolding water condensing based porous films for deep ultraviolet LEDs," ACS Photonics 4(10), 2479–2485 (2017).
34. X.-H. Li, R. Song, Y.-K. Ee, P. Kumnorkaew, J. F. Gilchrist, and N. Tansu, "Light extraction efficiency and radiation patterns of III-nitride light-emitting diodes with colloidal microlens arrays with various aspect ratios," IEEE Photonics J. 3(3), 489–499 (2011).
Introduction
White light-emitting diodes (WLEDs) have been widely applied in display backlighting, street lamps, headlamps, and indoor illumination due to their compact size, energy savings, environmental friendliness, and long lifetime [1][2][3]. Currently, commercial phosphor-converted WLEDs are produced by combining blue LED chips with yellow Y3Al5O12:Ce3+ (YAG:Ce3+) phosphor embedded in organic resins [4,5]. However, for high-power WLEDs, the organic encapsulants, with their poor thermal and photonic stability, may age easily and turn yellow during long-time service, resulting in a reduction of luminous efficacy (LE), a shift of chromaticity, and a degradation of long-term reliability [6,7]. In addition, the refractive index of organic encapsulants (~1.5) is lower than that of the YAG:Ce3+ phosphor (~1.83), which reduces the light escaping from the phosphor particles due to reflection loss [8,9].
To overcome the weaknesses of organic encapsulants, phosphor-in-glass (PiG), which is prepared by low-temperature (<800°C) sintering of glass powders and phosphor particles, has been employed as a promising luminescent converter for high-power WLEDs because of its robustness, high thermal stability, and low thermal expansion coefficient [10][11][12][13][14][15]. Furthermore, the precursor glass matrix can achieve a high refractive index for the PiG converter by adding ions of large polarizability. In the last year, we innovatively developed a YAG:Ce3+-based PiG converter by introducing yellow YAG:Ce3+ phosphor embedded in borosilicate glass through screen-printing and low-temperature sintering, which yields cool white light with an LE of 114 lm/W, a correlated color temperature (CCT) of 5524 K, and a color rendering index (CRI) of 69 [16,17]. Unfortunately, it is notoriously difficult for this YAG:Ce3+ PiG to achieve natural white light with a high CRI, owing to the deficiency of the red spectrum in the emitted light. In order to increase the CRI, multi-component PiGs have been introduced by adding red CaAlSiN3:Eu2+ (CASN:Eu2+) phosphor into the glass matrix [18][19][20]. It should be noted that the red phosphor is prone to thermal degradation during co-sintering and undergoes an interfacial reaction with the glass matrix, both of which reduce the quantum efficiency of the red phosphor [21,22]. In addition, a PiG converter with a flat surface suffers total internal reflection (TIR) at the glass-air interface because of the refractive index mismatch between glass and air, which leads to light being trapped in the remote-type LED package [23,24]. It is well known that micro/nano patterned structures can reduce the TIR loss and enhance light extraction [25][26][27]. Currently, various strategies have been proposed to fabricate patterned structures, including lithography [28], photoresist reflow [29], direct laser writing [30], and inkjet imprinting [31]. However, most strategies involve at least one photolithographic step, which suffers from high cost, long processing time, and heavy pollution. Although direct laser writing can fabricate patterned structures with high precision, it is expensive and inefficient for large-area fabrication. The appeal of inkjet printing lies in it being a mask-less, large-scale, and low-cost manufacturing method, but this method is limited by inhomogeneous morphology and large feature size [32]. Therefore, it is still a challenge to fabricate a YAG:Ce3+ PiG with high light extraction and color quality by using a flexible and effective method.
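To make the TIR argument concrete, a back-of-the-envelope check (ours, using the glass index of ~1.82 at 460 nm reported later for the prepared matrix) gives the critical angle at a flat glass-air interface:

$$\theta_c = \arcsin\!\left(\frac{n_{\mathrm{air}}}{n_{\mathrm{glass}}}\right) = \arcsin\!\left(\frac{1}{1.82}\right) \approx 33.3^{\circ},$$

so any internally generated ray striking the flat surface at more than about 33° from the normal is totally reflected and trapped; surface patterning that randomizes the local incidence angle widens the effective escape cone.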
Herein, a patterned red phosphor layer was fabricated on a YAG:Ce3+ PiG by using a flexible and mask-less water condensing approach, which not only avoids the thermal and chemical degradation of the red phosphor during co-sintering but also enhances light extraction from the LED package. A low-temperature glass matrix with a refractive index matching that of the YAG:Ce3+ phosphor was synthesized. The YAG:Ce3+ PiG was prepared by screen-printing and co-sintering. The micro-concave array was fabricated on the red phosphor layer through ultraviolet (UV) assisted and initiative cooling based water condensing. The aspect ratio of the micro-concaves was adjusted by controlling the UV pre-curing time. WLED modules were packaged with the YAG:Ce3+ PiGs carrying the patterned red phosphor layers, and their optical performances were investigated.
Experimental details
Figure 1 shows the fabrication process of the patterned red phosphor layer on the YAG:Ce3+ PiG. The YAG:Ce3+ PiG was prepared by screen-printing and co-sintering the commercial yellow YAG:Ce3+ phosphor and a low-melting glass matrix. The precursor glass matrix with the stoichiometric composition of 25B2O3-10SiO2-35ZnO-6Li2O-12La2O3-12WO3 (in mol%) was synthesized by a typical melt-quenching process. The reaction mixture was melted in an alumina crucible at 1250°C for 1 h and then poured onto a pre-heated steel plate, subsequently annealed at 400°C for 1 h to relieve the stress, and finally cooled to room temperature. The obtained glass bulk was milled to glass powders in a ball grinder. Then, the glass paste with 40 wt% YAG:Ce3+ phosphor was printed on a glass plate with a thickness of 0.7 mm and sintered at 600°C for 30 min. After that, the commercial red CASN:Eu2+ phosphor embedded with a UV-curable polymer (NOA61, Norland, USA) was spin-coated on the prepared PiG at 2000 rpm for 20 s. The viscosity of the UV-curable polymer is 300 cps. The red phosphor was uniformly dispersed in the polymer by using a planetary mixer under ultrasonic agitation. The relative amount of the red phosphor to the UV-curable polymer was set as 10 wt%. The red-phosphor-coated YAG:Ce3+ PiG achieves different luminescence properties depending on the number of spin-coated layers. The red phosphor layer with micro-concave patterns was fabricated by the UV assisted and initiative cooling based water condensing, as detailed in our previous work [33]. In order to achieve micro-concaves with various sizes, the viscoelastic state of the red phosphor layer was adjusted by UV pre-exposure under a 385 nm UV-LED with a power density of 1.5 W/cm² for 20 s, 30 s, or 40 s. The distance between the red layer and the UV-LED was 50 mm. Then, a directly initiative cooling method using a thermoelectric cooler (TEC) with an input current of 0.6 A was applied to keep the surface of the red layer cool for 2.5 min. In addition, the ambient temperature and relative humidity (RH) were controlled at 15°C and 75%, respectively, by using a constant temperature and humidity box. As the temperature of the red layer surface was sufficiently lower than the ambient temperature, small and disordered water droplets spontaneously condensed on the red layer surface owing to water vapor nucleation, and then grew and self-assembled into a closely packed array. After the solidification of the red phosphor layer and the evaporation of the droplets under the 385 nm UV-LED with a power density of 5 W/cm² for 2 min, the micro-concave array was formed on the red phosphor layer by using the water droplet array as a template. Finally, micro-concave patterns with different aspect ratios were fabricated by controlling the UV pre-curing time.
For the packaging of high-power WLEDs, four blue LED chips (2 × 2) with a peak wavelength of 454 nm were attached and wire-bonded at the bottom of the substrate. The fabricated color converters were bonded on the top of the substrate with the patterned red phosphor layer facing outward. The gap between the chips and the converters was filled with silicone encapsulant, which can reduce the reflection loss at the chip surface and enhance light extraction. The glass transition temperature (Tg) and the crystallization temperature (Tc) of the prepared glass matrix were analyzed by simultaneous thermal analysis (STA449F3, Netzsch, Germany). The refractive index of the prepared glass matrix was measured by spectroscopic ellipsometry (M-2000V, J. A. Woollam, USA). The photoluminescence excitation (PLE) and PL emission spectra of the YAG:Ce3+ PiG and the red-phosphor-coated PiGs were recorded on a fluorescence spectrophotometer (FP-6500, Jasco, Japan). The YAG:Ce3+ PiG and the micro-concave patterns were characterized by using a scanning electron microscope (SEM, Nova NanoSEM 450, FEI) equipped with an energy dispersive spectrometer (EDS). The electroluminescence (EL) spectra, LE, CCT, CRI, and chromaticity coordinates of the WLED modules were measured in an integrating sphere (HAAS-2000, Everfine, China) with a diameter of 50 cm. The far-field emission intensity of the WLED modules was measured by a light intensity distribution (LID) curve tester (GO1900L, Everfine, China).
Results and discussion
Figure 2(a) presents the differential scanning calorimetry (DSC) thermal analysis of the prepared glass matrix. In the DSC curve, the glass transition temperature (Tg) is found at 520°C and the crystallization temperature (Tc) is estimated to be 635°C. This result indicates that the prepared glass matrix achieves a low melting temperature without crystallization in the temperature interval of 520-635°C. The refractive index of the prepared glass matrix was measured, as shown in Fig. 2(b). The glass matrix yields a refractive index (~1.82 at 460 nm) similar to that of the YAG:Ce3+ phosphor, which can reduce the reflection loss at the phosphor particles. The PLE and PL spectra of the prepared YAG:Ce3+ PiG are shown in Fig. 3(a). The PLE spectrum of the YAG:Ce3+ PiG displays two excitation bands centered at 342 nm and 463 nm, originating from the 4f→5d transition of Ce3+. The PL spectrum of the YAG:Ce3+ PiG exhibits a typical Ce3+ 5d→4f broadband emission centered at 541 nm under 463 nm excitation. Figure 3(b) presents the PL emission spectra of the YAG:Ce3+ PiG coated with different numbers of red phosphor layers. With an increasing number of red layers, the yellow emission intensity decreases while the red emission intensity increases, and the peak wavelength shifts from 541 nm to 583 nm, which can improve the light quality. The micro-concave arrays were fabricated on the red phosphor layers by the water condensing method. The YAG:Ce3+ PiGs coated with 3 red phosphor layers were used for the fabrication of the micro-concave arrays. Figure 5(a)-(c) show the top and cross-sectional SEM images of the micro-concave arrays fabricated at different UV pre-curing times. With a UV pre-curing time of 20 s, a relatively uniform concave array with an average width of 1.47 μm and an average depth of 1.51 μm is achieved on the surface of the red phosphor layer. As the UV pre-curing time increases, the average width increases and the average depth decreases, and the distribution of the micro-concave array becomes nonuniform at a UV pre-curing time of 40 s. With UV pre-curing times of 20 s, 30 s, and 40 s, the average aspect ratio of the micro-concaves is 1.03, 0.76, and 0.41, respectively. This is attributed to the fact that the red layer surface changes from a viscous liquid to a viscoelastic state as the UV pre-curing time increases, which reduces the penetration depth of the water droplets into the surface and increases the probability of droplet coalescence. Thus, micro-concave arrays with different aspect ratios can be easily fabricated on the red phosphor layer by using the time-saving and low-cost water condensing method. The high-power WLED modules were packaged with the different converter samples, and their optical performances at a driving current of 350 mA are shown in Table 1. The red phosphor layers with micro-concave arrays fabricated at UV pre-curing times of 20 s, 30 s, and 40 s are treated as patterned samples 1, 2, and 3, respectively.
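As a quick consistency check on the quoted aspect ratios (our own arithmetic; the aspect ratio here is depth divided by width), the 20 s concaves give

$$\mathrm{AR}_{20\,\mathrm{s}} = \frac{1.51\ \mu\mathrm{m}}{1.47\ \mu\mathrm{m}} \approx 1.03,$$

matching the reported value; the 30 s and 40 s ratios follow in the same way from the wider, shallower concaves summarized in Fig. 5(d).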
Compared with the YAG:Ce3+ PiG sample, the unpatterned sample with the red phosphor layer coated on the YAG:Ce3+ PiG achieves an increased CRI of 77.2, while its LE and CCT decrease owing to the increased red emission intensity, as shown in Fig. 6(a). Furthermore, the three patterned samples achieve higher LE and lower CCT than the unpatterned sample because of the increased yellow and red emission. The LE of these three patterned samples is increased by 11.7%, 16.2%, and 10.0%, respectively, compared with the unpatterned sample, which is attributed to the increased photon escape cone and the reduced TIR loss. Patterned sample 2 exhibits an LE of 108.5 lm/W, a CCT of 4831 K, and a CRI of 80.5. Patterned sample 3 shows a small decrease in LE and CRI due to the relatively nonuniform micro-concaves and the low aspect ratio of 0.41, which reduces the scattering effect and the light intensity extracted from the sample [34], as shown in Fig. 6(a). Notably, all the patterned samples yield a CRI higher than 78. The LE of the WLED modules with the different converter samples at various driving currents was measured, as shown in Fig. 7(a). The LE of all the samples decreases with increasing driving current, and the patterned samples achieve a higher LE than the unpatterned sample over the entire current range. In addition, the far-field emission intensities of the WLED modules packaged with the unpatterned and patterned samples were investigated. The normal-direction and large-angle far-field emission of the patterned samples is higher than that of the unpatterned sample, which is attributed to the reduced TIR and the increased light extraction in the large angular directions. Patterned sample 2 achieves the highest far-field emission at large angular distribution, consistent with its light extraction enhancement.
Fig. 3. (a) Normalized PLE and PL spectra of the prepared YAG:Ce3+ PiG. (b) PL emission spectra of the prepared YAG:Ce3+ PiG coated with different numbers of red phosphor layers.
Figure 4(a) and (b) show the photographs of the unpatterned and patterned samples with different red phosphor layers coated on the YAG:Ce3+ PiG, respectively. With an increasing number of red phosphor layers, the transparency of the samples is reduced because a larger fraction of the emitted light is scattered. The patterned samples exhibit less transparency than the unpatterned samples due to the scattering properties of the micro-concave structure. The cross-sectional image of the patterned sample with 3 red layers is shown in Fig. 4(c). The thickness of the YAG:Ce3+ glass layer and the red phosphor layer is 40 μm and 35 μm, respectively. Obviously, the micro-concave array is formed on the red phosphor layer. No agglomerates or interfacial phase between the YAG:Ce3+ phosphors and the glass matrix can be discerned, and the Y, Al, and O signals were detected on the YAG:Ce3+ phosphor. This result demonstrates that the YAG:Ce3+ microcrystals are well embedded in the glass matrix to form the YAG:Ce3+ PiG.
Fig. 4. Photographs of (a) unpatterned samples and (b) patterned samples with different red phosphor layers. (c) Cross-sectional SEM image of a patterned sample and EDS curve of the YAG:Ce3+ glass layer.
Fig. 5. Top and cross-sectional SEM images of the fabricated micro-concave arrays at UV pre-curing times of (a) 20 s, (b) 30 s, and (c) 40 s. (d) Average width and depth of the fabricated micro-concaves at different UV pre-curing times.
Figure 6(b) presents the changes of the CIE chromaticity coordinates of the WLED modules with the different converter samples. The CIE chromaticity coordinates appear in the nearly natural white light region, close to the blackbody radiation curve.
Fig. 6. (a) EL spectra and (b) CIE chromaticity coordinate distributions of WLED modules with different converter samples at the driving current of 350 mA. The inset shows the corresponding WLEDs in operation.
Fig. 7. (a) LE of WLED modules with different samples as a function of current. (b) Far-field emission intensities of WLED modules packaged with unpatterned and patterned samples. | 5,545.4 | 2018-03-01T00:00:00.000 | [
"Materials Science",
"Engineering",
"Physics"
] |
An intelligent platform for ultrasound diagnosis of thyroid nodules
This paper proposed a non-segmentation radiological method for the classification of benign and malignant thyroid tumors using B-mode ultrasound data. This method aimed to combine the advantages of the morphological information provided by ultrasound with those of convolutional neural networks in automatic feature extraction and accurate classification. Compared with traditional feature extraction methods, this method directly extracted features from the data set without the need for segmentation and manual operations. 861 benign nodule images and 740 malignant nodule images were collected as training data. A deep convolutional neural network, VGG-16, was constructed to analyze test data comprising 100 malignant nodule images and 109 benign nodule images. A nine-fold cross-validation was performed for training and testing of the classifier. The results showed that the method had an accuracy of 86.12%, a sensitivity of 87%, and a specificity of 85.32%. This computer-aided method demonstrated diagnostic performance comparable with the result reported by an experienced radiologist based on the American College of Radiology Thyroid Imaging Reporting and Data System (ACR TI-RADS) (accuracy: 87.56%, sensitivity: 92%, and specificity: 83.49%). The automation advantage of this method suggests application potential in the computer-aided diagnosis of thyroid cancer.
Thyroid cancer is the most common endocrine cancer, and its incidence has increased rapidly worldwide, especially in Asian countries 1,2 . Most thyroid cancers present as thyroid nodules, which are usually detected by chance during neck ultrasonography performed for other disorders 3,4 . With high-resolution ultrasound, the prevalence of thyroid nodules is as high as 19-68% in a randomly selected population. Since most nodules are benign and the percentage of malignant ones is relatively low (7-15%), it is of great importance to distinguish benign from malignant thyroid nodules 1,5,6 . When doctors notice the presence of nodules, they perform a systematic assessment of the thyroid gland. This includes blood tests of thyroid function, such as thyroxine (T4) and triiodothyronine (T3), but these usually cannot predict whether a nodule is benign or malignant 7,8 . With the development of high-frequency ultrasound technology, systematic ultrasound examination of the neck can be carried out to identify the nature of the nodules 9 . This examination allows doctors to measure the number, size, and shape of nodules and to detect other possible abnormalities. Thyroid ultrasound provides information about the structure and characteristics of the nodules, which is helpful in the diagnosis of various types of thyroid nodules, including their composition, echogenicity, shape, margin, and echogenic foci 10,11 . Yet, because this technique is based on subjective assessment, ultrasound-guided fine-needle aspiration (FNA) is recommended for the differential diagnosis of benign and malignant thyroid nodules 1 .
Deep convolutional neural networks (DCNN) 12-14 are a kind of artificial intelligence method that has been applied in more and more research fields [15][16][17] . They have found new applications in dermatology 18 , ophthalmology 19 , radiology 20,21 , and other fields [22][23][24] . In recent years, research on DCNNs in the field of radiology has shown that the performance of these algorithms is equivalent to that of radiologists. With the continuous development of this field, the possible types and quantities of deep learning models are also increasing 25 . Compared with traditional feature extraction methods, the DCNN method directly extracts features from the data set without the need for segmentation and complex manual operations 26 . The objective of the current work was to design a computer-aided system based on a DCNN to automatically classify benign and malignant nodules from thyroid ultrasound images. With B-mode ultrasound data, this method aimed to combine the advantages of the morphological information provided by ultrasound with those of convolutional neural networks in automatic feature extraction and accurate classification. The validity of this method was verified by comparing the current experimental results with results obtained by an experienced radiologist based on the American College of Radiology Thyroid Imaging Reporting and Data System (ACR TI-RADS) [28][29][30] .
Results
A total of 1,810 images from 1,452 subjects were obtained, of which 840 were malignant and 970 were benign. Detailed information on the collected cases is shown in Table 1. The total number of malignant nodule images is 840 of 1,810 (46.4%), comprising 740 of 1,601 (46.22%) malignant nodule images in the training group and 100 of 209 (47.8%) in the testing group. The difference in the percentages of malignant and benign nodules between the training group and the test group is not statistically significant (Table 2).
The experimental steps are illustrated in Fig. 1. In the current work, the proposed DCNN model was used to analyze the thyroid ultrasound images. FNA and surgical results were taken as the reference.
To avoid any additional workload for the radiologist, we used a bounding box around each nodule defined by the enclosing calipers (used in clinics for nodule measurement), so the radiologist did not need to draw the boundary of the nodule. The deep convolutional neural network VGG-16 31 for large-scale target recognition was evaluated and fine-tuned for nodule recognition based on ultrasound images.
We validated the performance of the classifier using nine-fold cross-validation. As shown in Table 3, for the training set of differentiating benign and malignant nodules, the area under the receiver operating characteristic curve (AUC) of the algorithm is 0.9054 (95% confidence interval (CI) 0.8773, 0.9336). The accuracy is 86.27% (95% CI 84.11%, 88.43%), the sensitivity is 87%, and the specificity is 86.42% (95% CI 83.10%, 89.74%).
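For readers who want to reproduce this kind of evaluation, a minimal sketch of a stratified nine-fold loop is given below. The paper does not publish code; `build_model`, the fit/predict interface, and the use of scikit-learn are our assumptions, while the nine-fold protocol and the AUC metric come from the text.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def cross_validate(images, labels, build_model, n_splits=9, seed=0):
    """Stratified k-fold keeps the benign/malignant ratio similar in every fold."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    aucs = []
    for train_idx, val_idx in skf.split(images, labels):
        model = build_model()                       # e.g., a freshly fine-tuned VGG-16
        model.fit(images[train_idx], labels[train_idx])
        p = model.predict_proba(images[val_idx])[:, 1]
        aucs.append(roc_auc_score(labels[val_idx], p))
    return np.mean(aucs), np.std(aucs)
```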
As shown in Fig. 2, for the test set of differentiating benign and malignant nodules, the AUC of our proposed method is 0.9157, the accuracy is 86.12%, the sensitivity is 87%, and the specificity is 85.32%. The AUC of the experienced radiologist's diagnosis is 0.8879. The cut-off value of TI-RADS reported by the radiologist is 4, which corresponds to the top-left point on the receiver operating characteristic curve. The accuracy, sensitivity, and specificity reported by the radiologist according to ACR TI-RADS are 87.56%, 92%, and 83.49%, respectively. Statistical analysis 32,33 shows that there is no significant difference between our algorithm and the result reported by the experienced radiologist (p > 0.1).
Discussion
Ultrasound diagnosis of thyroid nodules is time-consuming and labor-intensive, and suffers from inter-reader variability.
In this research, we developed a deep learning algorithm to provide management recommendations for thyroid nodules based on ultrasound image observations, and compared the results with those obtained by a radiologist following the ACR TI-RADS guidelines. With the thyroid nodule classification system proposed in this paper, based on deep neural networks, experimental results on ultrasound images indicated that this method achieves classification performance comparable to the result reported by the experienced radiologist. In the present work, we applied the deep neural network to the dataset (1,601/209 images for training/testing). The 1,601 training images included 861 benign nodule images and 740 malignant nodule images. The 209 test images included 109 benign nodule images and 100 malignant nodule images. The experimental results showed that the accuracy, sensitivity, and specificity of this method reached 86.12%, 87%, and 85.32%, respectively. Our findings support increasing evidence that deep learning can be applied to thyroid clinical diagnosis. After training, through a similarity activation map analysis, the DCNN model could be used to pinpoint malignant thyroid nodules. DCNN models, together with machine learning methods based on traditional feature extraction, have been used to identify the malignancy of thyroid nodules from ultrasound images. Ma et al. 24 used a DCNN to analyze 8,148 hand-labeled thyroid nodules and obtained 83.0% (95% CI 82.3-83.7) diagnostic accuracy; this experiment required a big data set for training. Xia et al. 34 obtained an accuracy of 87.7% in distinguishing benign and malignant nodules, using an extreme learning machine and radiological features collected from 203 ultrasound images of 187 thyroid patients. This method had a relatively lower specificity and required the radiologist to draw the nodule boundary, which brought a lot of work to doctors. Pereira et al. reported that the accuracy of a DCNN model in distinguishing 946 malignant and benign thyroid nodules from 165 patients was 83% 35 . Chi et al. 23 used imaging features extracted by a deep convolutional neural network to perform the binary task of separating TI-RADS categories 1 and 2 from the other categories, and reached more than 99% accuracy. Although the performance seemed excellent, this was a greatly simplified task of predicting categories 1 and 2. Also, the research subjects did not have FNA or surgery results for comparison.
In the present study, all the patients with thyroid cancers in the training and test data sets had FNA or surgery results. Furthermore, to avoid additional workload for the radiologist, the boundary of the nodule did not need to be drawn by a radiologist. In another study, by Buda et al. (2019), 1,377 thyroid nodules from 1,230 patients with complete imaging data and clear cytological or histological diagnoses were used 36 . For 99 test nodules, their deep learning algorithm achieved a sensitivity of 13/15 (87%; 95% CI 67%, 100%), which was the same as the expert consensus and higher than that of 5 of 9 radiologists. The specificity of the deep learning algorithm was 44/84 (52%; 95% CI 42%, 62%), which was similar to the expert consensus (43/84; 51%; 95% CI 41%, 62%; p = 0.91) and higher than that of the other 9 radiologists. The average sensitivity and specificity of the 9 radiologists were 83% (95% CI 64%, 98%) and 48% (95% CI 37%, 59%). Our experiment had a comparable sensitivity and a higher specificity.
In summary, the proposed DCNN diagnosis algorithm can effectively classify benign and malignant thyroid nodules, and exhibited diagnostic performance comparable to the results reported by the experienced radiologist according to TI-RADS. This method might enable potential applications in the computer-aided diagnosis of thyroid cancer. However, the present study still had some limitations. For instance, we did not find that the accuracy of the proposed computer-aided platform was related to tumor size or cancer subtype. The number of cases enrolled in the current study was small. More types of patients should be included, and the accuracy of the proposed model should be further verified and improved.
Methods
Research cohort. This retrospective study was approved by the institutional review board of the First Affiliated Hospital of Nanjing Medical University, and informed consent was obtained from all patients. All study methodologies were carried out in accordance with relevant guidelines and regulations. From January 2018 to September 2019, a group of patients with thyroid nodules who underwent ultrasound examination before surgery or biopsy were included in this retrospective study. The inclusion criteria were as follows: (a) age > 18 years; (b) no previous hormone therapy, chemotherapy, or radiation therapy; (c) thyroid nodule diameter > 5 mm. Images without diagnostic, or with indeterminate, cytological or histological results were excluded. The diagnosis of a malignant nodule was made when malignancy was confirmed on a surgical specimen, by core-needle biopsy (CNB), or by FNA cytology. A nodule was classified as benign when any one of the following criteria was met: (a) confirmation using a surgical specimen; (b) benign FNA cytology findings; (c) US findings of very low suspicion 9 ; or (d) cystic or almost completely cystic nodules and spongy nodules (mainly composed of more than 50% small cystic spaces).
The database of 1,810 thyroid disease images was evaluated by two experts. B-mode ultrasound examination was carried out with several types of commercial US equipment: (1) Esaote MyLab Twice (Genova, Italy).
Pathological reference. It is known that ultrasound-guided FNA has high specificity and sensitivity in the diagnosis of benign and malignant thyroid nodules, so it can be used as a reference for their differential diagnosis. Therefore, in the present work, FNA was taken as the reference after B-mode ultrasound diagnosis. The final pathological diagnosis of a benign or malignant thyroid nodule was assigned according to the cytology (or histology, if available). Under ultrasound guidance by an interventional radiologist, a 25G needle was used. After the location of the nodule was determined under ultrasound guidance, several samples of the nodule were obtained with the needle in the ultrasound scanning plane. Three or four biopsies were fixed in BD CytoRich Red Preservative fluid (Becton, Dickinson and Company, Mebane, USA), and then a sedimentation-based cytological examination was performed. All slides were reviewed and interpreted by three experienced cytotechnologists who reported thyroid cytopathology with reference to the Bethesda system. If the nodule had undergone core-needle biopsy or surgical resection, the histological results were used instead of the cytological examination. Figure 3a shows the B-mode ultrasound image of a benign thyroid nodule. Figure 3b shows the B-mode ultrasound image of a malignant thyroid nodule. Figure 3c shows the FNA smear micrograph of a benign nodule. Figure 3d shows the FNA smear micrograph of a malignant nodule. Cytological images were obtained based on Pap staining.
Algorithm. In this paper, the deep convolutional neural network VGG-16 was fine-tuned and evaluated on ultrasound images for thyroid nodule diagnosis. The convolutional neural network included five convolution-and-pooling modules for extracting complex features from each input image. These features were flattened into a single vector. The output of the model was a collection of continuous variables that represented the predicted probabilities for each category (range 0.0-1.0) and were treated as a discrete probability distribution. The final classification was calculated from the probability-weighted classes. The input of this network was passed through a stack of convolutions with 3 × 3 filters to a depth of 16 to 19 weighted layers. The stack of convolutions was followed by three fully connected layers (viz., 16 layers with learnable weights: 13 convolutional and 3 fully connected layers). The network was fine-tuned using training sets containing benign and malignant samples to identify nodules. This was done by keeping all layers except the last fully connected layer from the pre-trained network and adding a new fully connected layer and softmax. The resulting supervised training performed the nodule classification task with the expected results and reduced computational complexity. Figure 4 shows the network structure of the intelligent platform.
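The transfer-learning step described above (keep the pretrained stack, replace the last fully connected layer, add softmax) can be sketched as follows. This is our reconstruction in PyTorch, not the authors' code; the two-output head for benign/malignant is the only detail added beyond the text.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load VGG-16 pretrained on large-scale image recognition.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Replace the last fully connected layer with a new 2-class head
# (benign vs. malignant); softmax probabilities are taken at inference time.
in_features = model.classifier[6].in_features   # 4096
model.classifier[6] = nn.Linear(in_features, 2)

# Fine-tune: optionally freeze the convolutional feature extractor.
for p in model.features.parameters():
    p.requires_grad = False

criterion = nn.CrossEntropyLoss()               # applies log-softmax internally
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3, momentum=0.9
)
```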
Statistical analyses.
We performed a t-test of the hypothesis that the data in the vector X came from a distribution with mean zero, and returned the result of the test in H. H = 0 indicates that the null hypothesis cannot be rejected at the 5% significance level; H = 1 indicates that the null hypothesis should be rejected at the 5% level. The data were assumed to come from a normal distribution with unknown variance.
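This description matches the convention of MATLAB's ttest; an equivalent one-sample test can be sketched as follows (our illustration, using SciPy):

```python
from scipy import stats

def ttest_zero_mean(x, alpha=0.05):
    """Return H = 1 if the null hypothesis (mean == 0) is rejected at level alpha."""
    t_stat, p_value = stats.ttest_1samp(x, popmean=0.0)
    return int(p_value < alpha), p_value
```

| 3,407.6 | 2020-08-06T00:00:00.000 | [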
"Medicine",
"Computer Science"
] |
The Application of a Convolutional Neural Network for the Detection of Contrails in Satellite Imagery
This study presents a novel approach for the detection of contrails in satellite imagery using a convolutional neural network (CNN). Contrails are important to monitor because their contribution to climate change is uncertain and complex. Contrails are found to have a net warming effect because the clouds prevent terrestrial (longwave) radiation from escaping the atmosphere. Globally, this warming effect is greater than the cooling effect the clouds have in reducing the solar (shortwave) radiation reaching the surface during the daytime. The detection of contrails in satellite imagery is challenging due to their similarity to natural clouds. In this study, a certain type of CNN, U-Net, is used to perform image segmentation in satellite imagery to detect contrails. U-Net detects contrails with an overall probability of detection of 0.51, a false alarm ratio of 0.46, and an F1 score of 0.52. These results demonstrate the effectiveness of using a U-Net for the detection of contrails in satellite imagery, and the approach could be applied to large-scale monitoring of contrail formation to measure their impact on climate change.
Introduction
Contrails, also known as aviation-induced cirrus (AIC), are ice clouds that form when atmospheric conditions are near saturation and aircraft emissions inject enough water vapor to form clouds that would otherwise not have existed. The resulting white streaks can persist in the atmosphere for just a few minutes or for hours. The theory for forecasting conditions distinguishing temporary from persistent contrails is straightforward, as it only involves knowing pressure, temperature, humidity, and engine efficiency. In practice, while pressure, temperature, and engine efficiency are relatively easy to obtain with sufficient accuracy, humidity in the upper troposphere is not [1,2]. It is important that science improves in observing and forecasting the existence and location (including altitude) of persistent contrails, as the aviation industry is considering procedures to mitigate the impact with changes in flight levels. If the science is incorrect, this could cause the unnecessary use of fuel and therefore extra CO2 emissions. Moreover, without sufficient information, operational procedures will not be able to reroute aircraft away from regions with conditions favorable for the formation of the persistent contrails that cause the largest warming impact. It has been suggested that 5% of anthropogenic radiative forcing can be attributed to aviation [3]. Of all of the emissions from aviation, contrails contribute between 36% and 81% of radiative forcing [4], and the largest uncertainty in terms of radiative forcing is associated with contrails [5]. It is the persistent contrails that have an impact on the planetary albedo and the earth's radiation budget. Studies have found that just 12% of flights contribute on the order of 80% of energy forcing [3]. Therefore, diverting aircraft away from where conditions are favorable for the formation of contrails can reduce this radiative forcing [6]. One such study showed that during the height of the COVID-19 pandemic, air traffic declined by 72%, which corresponded with a drop in contrail radiative forcing of 0.7 W/m² [7]. Over just the US, air travel declined by 36%, which resulted in a reduction of contrails by 22% [8].
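For reference, the "straightforward theory" alluded to above is the Schmidt-Appleman criterion; its key quantity is the slope of the exhaust mixing line in a water-vapor-partial-pressure versus temperature diagram. The formula below is the standard form from the contrail literature, not taken from this paper:

$$G = \frac{\mathrm{EI}_{\mathrm{H_2O}}\, c_p\, p}{\varepsilon\, Q\,(1-\eta)},$$

where EI_H2O is the water vapor emission index of the fuel, c_p the specific heat capacity of air, p the ambient pressure, ε ≈ 0.622 the ratio of the molar masses of water and air, Q the specific heat of combustion, and η the overall propulsion efficiency. Contrails can form when the mixing line reaches liquid saturation, which is why pressure, temperature, humidity, and engine efficiency are exactly the inputs needed.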
The detection of contrails in satellite imagery is crucial to understanding their spatial and temporal extent and their impact on climate. Previous studies on contrail detection have found that contrail formation is overestimated by a factor of 2.0 to 3.5 and that the lifespan of contrails is overestimated by 17-45% [4]. Earlier work by Mannstein et al. (1999) developed an AVHRR approach for contrail detection [9] that was later adapted for SEVIRI [10]. This paper explores the application of a convolutional neural network (CNN) with a specific U-shaped architecture, U-Net [11], to perform image segmentation on satellite imagery for the detection of contrails. Although the physical and spectral characteristics are quite different, the long and linear shape of contrails is similar to that of fractures in sea ice known as sea ice leads. The approach to detecting contrails presented here is not a new technology; rather, the detection model developed for sea ice leads by Hoffman et al. [12,13] is reused, but in a new application. The architecture of the U-Net is the same; the detection model simply relies on training from a different dataset, contrail imagery rather than sea ice imagery. A similar approach has been used to detect ship tracks in satellite imagery [14]. Several other studies have demonstrated AI-based approaches to detecting contrails in satellite imagery, including Zhang et al. [15], Kulik [16], and Siddiqui et al. [17]. Recent work by Ng et al. [18] also uses a CNN to detect contrails in Geostationary Operational Environmental Satellites (GOES) Advanced Baseline Imager (ABI) imagery. The primary advantage demonstrated in this study is the value added by height retrieval, by aligning contrail detections with the GOES ABI Cloud Top Height (ACHA) product [19].
Materials and Methods
The first step in developing a contrail detection method is to assemble a dataset of contrail images that can be used to train, test, and validate the detection model [18]. The detection architecture is based on the same AI architecture that was used to detect quasi-linear sea ice lead features, as described by Hoffman et al. in 2021 [12] and updated in 2022 [13]. Both the contrail and sea ice lead detection methods use a particular kind of convolutional neural network, U-Net, first described by Ronneberger et al. in 2015 [11], to perform image segmentation. For sea ice lead detection, a single thermal imagery channel was used. The adaptation for contrail detection is to use brightness temperature difference imagery between the 11 µm and 12 µm channels. To process GOES ABI data, the imagery is divided into several small regions (256 × 256 pixel samples) that are stitched together to cover the entire full disk. Contrails can be detected near processing region boundaries, and contrails that span a processing region boundary can be detected in both regions when the contrail is sufficiently long in each processing region (as long as the contrail segment in each sub-region maintains the spatial characteristics of a contrail). However, when a contrail spans a processing region boundary, the contrail segment on one side of the boundary can be too short for detection. To mitigate any detection bias near processing grid boundaries, the processing allows a random amount of overlap between neighboring grids. Over a time series of imagery, processing region boundaries rarely overlap, and detection artifacts are therefore avoided.
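A minimal sketch of the tiling step described above is given below. Only the 12 µm minus 11 µm brightness temperature difference and the 256 × 256 tiles with random overlap come from the text; the array interface and the overlap range are our assumptions:

```python
import numpy as np

rng = np.random.default_rng()

def make_tiles(bt11, bt12, tile=256, max_overlap=64):
    """Split a brightness-temperature-difference image into 256 x 256 tiles.

    bt11, bt12: 2-D arrays of 11 um and 12 um brightness temperatures (K).
    Neighboring tiles overlap by a random amount so that a contrail cut by a
    tile boundary in one frame is unlikely to be cut at the same place again.
    """
    btd = bt12 - bt11   # ice clouds stand out in the 12 - 11 um difference
    tiles, y = [], 0
    while y + tile <= btd.shape[0]:
        x = 0
        while x + tile <= btd.shape[1]:
            tiles.append(btd[y:y + tile, x:x + tile])
            x += tile - rng.integers(0, max_overlap + 1)
        y += tile - rng.integers(0, max_overlap + 1)
    return tiles
```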
For this project, we initially used a previously published dataset of Landsat-based contrail detections developed by McCloskey et al. in 2021 [20]. The dataset was later replaced by the newer Ng et al. 2023 dataset with GOES-16 imagery [18]. The dataset contains a set of contrails identified by hand analysis, which we use as a truth mask. Because of the differences in sensitivity to ice and water clouds between 11 µm and 12 µm [21], we use brightness temperature differences to identify cirrus clouds. For the training, testing, and validation set, the GOES ABI 11 µm and 12 µm channels are Band 14 and Band 15. The original Ng et al. dataset [18] contains several thousand cases that can be segregated for training, testing, and validation. For our study, we have used a subset of 10,000 cases. To avoid overfitting, care was taken to avoid cross-contamination of the training, testing, and validation datasets. From the 10,000 cases, a ratio of 70%/20%/10% was used for training, testing, and validation. The Ng et al. dataset [18] contains a time series of images for contrail masks; however, for operational simplicity, our analysis only uses the central time period in each contrail time series. Example validation imagery is shown for three randomly selected cases in Figure 1: the RGB imagery (red = 8 µm, green = 11 µm, blue = 12 µm) is shown for reference in the left column, and the second column is the 12 µm minus 11 µm brightness temperature difference imagery that is used in the detection model. The third column is the Ng et al. [18] mask used as truth, and the final column is the U-Net detection model image segmentation solution. In general, the model detections are largely consistent with the truth masks. In the top row, there appear to be some contrails oriented in the vertical direction that are errors of omission in the U-Net. However, in the middle and bottom rows, there may be some errors of omission in the hand analysis; some of the U-Net detections resemble contrails in the satellite imagery but escaped detection in the hand analysis mask. Overall, the U-Net does appear to positively identify the contrails that are apparent in the imagery.
Results
One way to illustrate detection performance is in the form of a receiver operating characteristic (ROC) curve, which is provided in Figure 2; generally, curves that are near the upper left corner are good, and a detection model near the black line is indicative of low skill. Another way to assess skill is to measure the area under the curve (AUC-PR), and the results we achieved have an AUC-PR of 73.9, which compares favorably to the AUC-PR of 72.7 achieved by Ng et al. [18]. The image segmentation results form an image of probabilities from 0 to 1, encoded as bytes ranging from 0 to 255. To make deterministic evaluations, a threshold to distinguish between contrail and clear must be chosen. For our application, we selected a value of 65 as the detection threshold that achieves the maximum F1 score. At a detection threshold of 65 (or 0.254 if encoded 0 to 1), the probability of detection is 0.508 and the false alarm rate is 0.460. The F1 score, also known as the Dice coefficient, is 0.524 and may be the best measure of success for the detection model, as it captures a balance of omission and commission error. A lower detection threshold could have been chosen to achieve a higher rate of true positives but at the expense of a higher false positive detection rate. However, the image segmentation results tend to be largely bimodal, with the majority of the solutions being near zero or near 250; relatively few solutions are near the value of the detection threshold. Despite some imperfections in the validation dataset and a rather conservative detection threshold, the detection metrics of the technique are overall very good, as shown in Table 1.
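The threshold selection described above can be sketched as a scan over all 256 byte values, keeping the one that maximizes the F1 (Dice) score. This is a minimal illustration, not the evaluation code used in the study.

```python
import numpy as np

def best_f1_threshold(prob_bytes, truth_mask):
    """Scan detection thresholds over byte-encoded probabilities
    (0-255) and return the threshold maximizing the F1 score, i.e.
    the Dice coefficient 2*TP / (2*TP + FP + FN)."""
    p = prob_bytes.ravel()
    t = truth_mask.ravel().astype(bool)
    best_thr, best_f1 = 0, 0.0
    for thr in range(256):
        det = p >= thr
        tp = np.sum(det & t)
        fp = np.sum(det & ~t)
        fn = np.sum(~det & t)
        denom = 2 * tp + fp + fn
        f1 = 2 * tp / denom if denom else 0.0
        if f1 > best_f1:
            best_f1, best_thr = f1, thr
    return best_thr, best_f1
```

Because the segmentation output is largely bimodal, the chosen threshold is insensitive to small shifts: few pixels fall near the cut value.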
An advantage of using thermal IR bands on ABI is that the technique can be applied day or night. The main advantage of using the GOES ABI is that it provides nearly complete coverage over the Western Hemisphere every 10 min at a nominal 2 km resolution. An example image of contrail detection over the Gulf of Mexico in GOES imagery is shown in Figure 3. In this example, contrail detections are overlaid on the 11 µm brightness temperature image from 0420 UTC (or 10:20 PM local time) on 10 March 2023. The contrails are color-coded by the GOES ABI Cloud Top Height (ACHA) product [19]. At that time, the native ACHA product had a 4 km nominal resolution, so the product is interpolated to match the 2 km resolution of the native ABI brightness temperature imagery. Later in March of 2023, a 2 km resolution ABI Cloud Top Height product became operational [19] to match the 2 km resolution of the imagery, so that interpolation will not be needed in future work. Additionally, the ACHA height is given in km; for aviation applications, it is converted to feet, and for each contrail, the height reported is the highest height retrieval within each contrail object.
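The per-contrail height reporting rule reduces to a short helper; the conversion factor is the standard 3280.84 ft per km, and the function name is an illustrative assumption.

```python
FEET_PER_KM = 3280.84  # km -> feet unit conversion

def contrail_height_feet(acha_heights_km):
    """Report the highest ACHA height retrieval within one contrail
    object, converted from km to feet for aviation users. Input is
    the list of ACHA retrievals inside the object's mask (sketch)."""
    return max(acha_heights_km) * FEET_PER_KM
```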
Discussion
Another GOES contrail detection case is presented in Figure 4. On 14 March 2023 at 2100 UTC, true color imagery is presented in Panel A and the corresponding 11 µm brightness temperature imagery is presented in Panel B. It is interesting to note that the contrails are barely distinguishable in the true color image against the snow-covered background; however, the contrails are readily apparent in the thermal IR. The GOES Cloud Mask and Cloud Top Pressure products are shown in Panels C and D, respectively. The contrail detection product is shown in Panel E, where it is interesting to note that the western and northern contrails are detected with a maximum height of between 20-25 thousand feet (orange), but the eastern and southern Wisconsin contrails are not detected by the GOES cloud mask (black). Finally, as validation, a photograph is shown looking northward from southern Wisconsin, taken at 4:05 P.M. local time (2105 UTC). The primary advantage of the detection method is that it uses techniques previously demonstrated to be effective in satellite remote sensing applications [12,13], and that it can be applied to this new application without changing the model architecture. The technique is also relatively simple to run in an operational sense because the detections are based on a single brightness temperature difference image rather than a more complex multi-spectral time series of images [18]. One weakness of using a simple brightness temperature difference image for contrail detection is that there are non-cloud features that can have similar spatial characteristics to contrails and also have a brightness temperature difference between 11 µm and 12 µm. Rivers and coastlines can cause apparent false detections; as an example, in Figure 4, the Mississippi River along the Illinois/Iowa border shows a contrail detection not associated with any cloud mask detections (black in Figure 4E). This was a larger issue in early iterations of the detection method that used Landsat-based training imagery [20]; however, this has become less of an issue after training with a more extensive training dataset [18]. Because ACHA height retrievals are limited to cloudy regions identified by the ABI Cloud Mask [19], non-cloud false contrail detections can be filtered out by using the ACHA product. However, this does not remove all false contrail detections because there are cases where natural clouds are misclassified as contrails. One other point to make is that contrail detections are often longer and wider features than are detected in the cloud mask or cloud height products. This is because the cloud mask and height product detection techniques are based primarily on spectral differencing tests. In contrast, the U-Net used for the contrail detection model is based on a combination of spatial and spectral characteristics. Features that have patches of low spectral contrast may escape continuous detection in the cloud mask but may be identified as a continuous contrail due to the spatial characteristics identified by the U-Net.
Conclusions
The premise that the same AI-based detection method that was used for the identification of sea ice leads in satellite imagery [12,13] can also be used for the detection of contrails in Geostationary satellite imagery has proven to be quite successful. Future iterations may focus on a multi-channel approach, using a time series of imagery and/or using
Understanding of an Iceberg Breaking Off Event Based on Ice-Front Motion Analysis of Amery Ice Shelf, Antarctica
On 26 September 2019, a massive iceberg broke off the west side of the Amery Ice Shelf (AIS) in East Antarctica. Since 1973, the AIS calving front has steadily advanced at a rate of 1.0 km yr−1. However, the advancement rate of the central portion of the AIS increased dramatically during 2012–2015, which indicates a velocity increase prior to the calving event. Eight calving front locations from 1973 to 2018 were mapped to investigate the advancement rate of the AIS over the entire observational period. Additionally, the propagation of rift A was observed to be unstable from 2012 to 2015. The westward propagation rate of rift A1 increased to 3.7 km yr−1 from 2015 to 2017, which was considerably faster than the other rifts near the AIS calving front. The increased advancement rate and the increasing propagation magnitude of at least one active rift appear to be precursors of this large calving event.
Introduction
Iceberg calving, followed by basal melting, is the primary cause of mass loss of the Antarctic Ice Sheet. As a floating extension of grounded glaciers over the ocean, ice shelves become one of the primary sources of new icebergs [1][2][3]. Alley et al. revealed that iceberg calving increases with the along-flow spreading rate of an ice shelf [4]. Hence, significant ice front advancement can be an important indicator for potential iceberg calving events. Moreover, most of the calving occurs at the front of ice shelves along the rift fractures that extend through the total thickness of the ice. These ice-shelf rifts can propagate horizontally over decades before a significant iceberg calving event [5]. Bassis et al. also described a clear relationship between rift propagation processes and flow dynamics over a wide range of environmental conditions, flow regimes, and spatial domains [5]. Hence, the characterization of the propagation of rifts on ice shelves plays an important role in understanding their stability and the influence of flow dynamics on ice-shelf calving.
Some studies have been conducted to measure the advancement of the Amery Ice Shelf (AIS) in East Antarctica to estimate its calving cycles [6,7]. Fricker et al. suggested that the AIS advanced seaward at a rate of 1.3-1.4 km yr−1 during 1936-2000 [6]. Zwally et al. revealed that the advancement of the central portion of the AIS was 1.03 ± 0.04 km yr−1 from 1978 to 1994 [8]. Walker et al. observed that the AIS calving front had steadily readvanced seaward in the 50 years following its last major calving event, which occurred in the early 1960s [7,9]. Darji et al. also examined the advancement rate of the AIS calving front but presented it as areal change per year, where the highest advancement rate between 2000 and 2017, at 517 km² yr−1, was observed in 2012-2013 [10].
On 26 September 2019, a massive iceberg, approximately 30 km by 60 km, broke off from the AIS calving front where the rift system is colloquially known as the 'Loose Tooth' [11]. This rift system, illustrated in Figure 1, consists of two longitudinal-to-flow rifts (A0 and B) and two transverse-to-flow rifts (A1 and A2), where rifts A0, A1, and A2 comprise rift A of Fricker et al. [11]. Darji [12]. However, the possible changes of the calving front advancement rate as well as the propagation rate of rifts prior to this significant event have been undocumented and need to be studied to better understand this calving event. The calving front location [13] (denoted as CFL afterwards) was used in this study to investigate the changes in advancement rates of the AIS calving front. This study also examines the propagation rates of four active rifts near the ice calving front over this period.
Materials and Data
To determine the advancement rate of AIS, eight CFLs were mapped from 1973 to 2018 using 15 Landsat images ( Table 1). The LandsatLook image product, inclusive of the natural color image and thermal image, both with geographic reference, was used for this study. Due to the surface temperature contrast between ice and open water, brightness temperature differences in the Landsat thermal channel were used to delineate the CFLs on the AIS. The LandsatLook thermal images provide the brightness temperature at full resolution and are resampled to 30 m, except the year of 1973, which was resampled at 60 m resolution. The brightness temperature was calculated from the calibrated scaled digital value for the thermal channel. An adaptive threshold value based on the primary histogram peak of each LandsatLook thermal image was used to create an ice surface mask to differentiate ice from open water. The flowlines presented on the eastern and central portions were derived from a gross error-free digital elevation model (DEM) from combined ERS-1 radar altimetry data and other contour data of the Antarctic [14]. The flowlines on the west were created manually along the ice flow direction.
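A minimal sketch of the adaptive thresholding described above follows: the threshold is anchored to the primary histogram peak of the LandsatLook thermal image. The assignment of the peak to the ice surface and the offset value are illustrative assumptions; the study's actual threshold rule may differ.

```python
import numpy as np

def ice_mask(brightness_temp, offset_k=2.0):
    """Separate ice shelf from open water with an adaptive threshold
    derived from the primary histogram peak of a thermal image.
    The peak is assumed here to correspond to the (colder) ice
    surface, so pixels at or below (peak + offset) are flagged
    as ice (sketch)."""
    valid = brightness_temp[np.isfinite(brightness_temp)]
    counts, edges = np.histogram(valid, bins=256)
    k = counts.argmax()
    peak_bt = 0.5 * (edges[k] + edges[k + 1])  # center of the modal bin
    return brightness_temp <= peak_bt + offset_k
```

Anchoring the threshold to each scene's own histogram, rather than using a fixed temperature, lets the same rule work across seasons and sensors with different calibrations.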
The CFLs along all nine flowlines for each time interval were delineated from the ice surface masks and combined into a single vector file. All the delineated CFLs were visually verified based on the LandsatLook natural-color image acquired on the same date as the thermal image. The upstream ends of the studied rifts are identified through visual interpretation and mapped as vector points. The data processing and analysis were conducted using the combination of ArcGIS 10.6.1 and ENVI 5.4.
Results
Nine cross-sections were created paralleling the dominant flowlines along each calving front position (Figure 1), which divided the AIS calving front into three portions: eastern (flowlines #1-3), central (flowlines #4-5), and western (flowlines #6-9). The advancement rate between two sequential images was computed as the ratio of the measured advance distance of the calving front at this location to the time interval between image acquisitions.
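The rate computation itself is a one-liner; the sketch below uses illustrative dates and function names, not the study's actual acquisitions.

```python
from datetime import date

def advancement_rate_km_per_yr(dist_km, d0, d1):
    """Advance distance along a flowline divided by the time between
    image acquisitions, expressed in km per year (sketch)."""
    return dist_km / ((d1 - d0).days / 365.25)

# e.g. a 2.9 km advance over a one-year interval -> ~2.9 km/yr
rate = advancement_rate_km_per_yr(2.9, date(2017, 1, 15), date(2018, 1, 15))
```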
Overall, the eastern portion of the AIS advanced at a stable rate of approximately 1.0 km yr−1 over much of the study period (Figure 2). This observation agrees with the previous estimates [6,8] (Figure 2). The advancement rate of the AIS along flowline #4 significantly increased to 2.9 km yr−1. The western portion of the AIS calving front sped up to 0.7 km yr−1 from 2017 to 2018, although these rates remained slower than those in the eastern and central portions.
Over the study period, the rifts A0 and B propagated parallel to the ice flow direction. In general, these rifts were observed to propagate in directions parallel to each other during 1988-2018 (Figure 3A). The year 1973 was excluded from the rift propagation analysis as no rift could be observed on the AIS prior to the late 1980s [7]. Two other rifts, rift A1 and rift A2, were initiated at the upstream end of rift A0 in the mid-late 1990s and propagated along a crossflow direction. No explicit initiation date has been documented yet [6,7,11].
The distance separating the rifts B and A0 was generally stable from 1988 to 2006 but continued to increase from 2006 to 2018. The A2-A0 distance also increased over the 2006-2018 period whereas the B-A2 distance remained generally stable along the eastern side of the AIS calving front ( Figure 3B) from 2006 to 2018. Rifts A1 and A2 are located on opposite sides of rift A0, and rift A1 propagated further from rift A0 than did A2 prior to 2003, as indicated in the blue dots in Figure 3A. The distance separating the rifts B and A2 also remained generally stable over the study period. However, the distances between the upstream ends of the rifts A1 and A0, as well as A2 and A0, increased over time. The separation of A1 and A0 continued to increase after its initiation due to its propagation in the crossflow direction.
Discussion
Besides the stable rate of approximately 1.0 km yr−1 observed in the eastern portion of the AIS over much of the study period, this study captured significant changes in advancement rates in the central portion of the AIS in the CFLs of 2015, 2017, and 2018. Four to five years prior to the 2019 calving event, the advance of the central portion of the AIS calving front accelerated relative to its eastern and western portions. A significant advancement rate change was observed in a stable flowline (#4) one year prior to the AIS calving event in 2019, which indicates a velocity increase prior to the calving event (Figure 2).
All the studied rifts were translated in the ice flow direction due to the AIS's consistent advancement. However, the four studied rifts propagated at different rates towards the calving front (Figure 3C). The propagation rates of rifts A2 and B remained stable at ~0 over the entire observational period. Overall, the propagation rates of rifts A1 and A2 estimated by this study are comparable with the existing measurements [10,12]. The slow decrease in the propagation of rift A2 from 2003 to 2012 agrees with the changing pattern in the propagation rate over the same period observed by Zhao et al. [12]. Because of the large differences in spatial resolution, Darji et al.'s propagation rates [10], derived using MODIS imagery (250 m), have larger uncertainties compared to the rates computed in this study using Landsat imagery.
Conclusions
This work examined changes in the advancement rate of the AIS calving front and the propagation rate of four active rifts during 1973-2018 as possible leading indicators of the recent calving event that occurred on 26 September 2019. Overall, prior to 2012, the AIS was observed to be steadily advancing at a rate of approximately 1.0 km yr−1. However, from 2012 to 2015, significant changes in the annual advancement, particularly in the central portion of the AIS calving front, were observed. The propagation of two studied rifts accelerated during 2015-2017. It appears that the advancement rate of the AIS calving front increased 4-5 years prior to the large calving event in 2019, while its rift propagation accelerated 2-3 years prior to the 2019 calving event. The increased advancement rate and the propagation magnitude of its active rifts may be used as an indicator for potential iceberg calving events on the AIS.
"Environmental Science",
"Geology"
] |
Exploring Japanese students' e-learning habits
Since 2010, both universities in which the authors teach have integrated pc-based e-learning or mobile learning programs into several English courses. Students' online performances are evaluated and used in determining the grades for each course. The data accumulated over the years indicates both changing and unchanged e-learning habits among Japanese university students. By analyzing the accumulated data, coupled with information obtained from questionnaires, we conclude that while some e-learning habits remain unchanged, several new learning habits have emerged. The unchanged e-learning habits include: predominant use of the iPhone, which remains the most popular mobile learning tool; a deadline rush for completing e-learning tasks; a preference for engaging in e-learning in the evening and at night; e-learning diligence, i.e., frequency of participation and duration of participation, show no change. In recent years some new trends have appeared in students' e-learning: more students entering university have previous experience with e-learning, and they are less concerned about the cost of mobile phone Internet connection.
Douglas Jarrell
Nagoya Women's University, Japan
Keywords: e-learning, Japanese students, e-learning habits, tendencies, changes and unchanged habits
introduction
E-learning has witnessed explosive development in the last two decades. Cassette players, black and white tvs, and overhead projectors (ohp), which were common in foreign language classes in the 1980s and 1990s, have almost disappeared. Correspondence-style courses relying on textbooks and radio or tv broadcasts have largely been replaced by digital materials. Multi-media learning materials such as audio, images, videos, and texts can now be easily transmitted via the Internet. Along with advances in hardware and networks, new e-learning software and systems have been developed.
Since the emergence of the iPhone in 2007, smartphones have become the norm, with almost every learner possessing one. Accordingly, mobile learning has become an extremely important branch of e-learning. As Gros & Garcia-Peñalvo (2016) point out, "future e-learning should encompass the use of Internet technologies for both formal and informal learning by leveraging different services and applications." Japan is a land of sophisticated information technology with one of the world's fastest mobile networks. E-learning is being carried out at all levels of schooling, especially in higher education. Many Japanese universities have integrated e-learning into their curricula. PowerPoint is no longer the only common tool for "e-learning" as teachers move on to other forms of e-learning to assist their teaching, to assign and collect homework, and to contact students. Moodle, Quizlet, and podcasts are among the applications that are commonly used these days by language teachers. In addition, the Japanese government has set a goal of "one student, one tablet" for all schools by 2020 (Ministry of Education, Culture, Sports, Science and Technology - Japan (mext), 2015), thereby enhancing the e-learning hardware environment. Furthermore, active learning, an educational mode that involves students in the learning process more directly, is strongly supported by mext.
According to Chen et al. (2010), students with e-learning experience have been shown to perform better in active and collaborative learning than students with only traditional class-learning. E-learning students are also more likely to use deep approaches to learning such as higher-order thinking and reflective learning. General "learning habits" refer to modes used by learners to process, organize and interact with learning materials (Kolb, 1984; Sadler-Smith, 1996). Gregorc (1985) asserted that almost 95% of individual learners have specific learning habits. In terms of e-learning, habits are the students' common styles of online behaviours. There are numerous e-learning research articles and reports available online or in print, but most of the research focuses on specific e-learning systems and individual implementations of e-learning projects. Very few have discussed autonomous learners' learning styles, their habits and the evolution of these styles and habits over time. Saeed et al. (2009) did discuss students' learning habits and their technology preferences, but their research was a case study focused on students' learning habits with blogs, podcasts and social bookmarks, not with an entire e-learning course. Kim et al. (2011) compared sns habits of Korean students and American students from a viewpoint of cultural differences. However, the sns in this research was not for learning purposes. Chang et al. (2009) developed a mechanism which can identify students' e-learning habits. However, the system was only tried out with elementary school students. Successful system application to university students has yet to be reported.
Among the very few research papers discussing Japanese university students' e-learning habits, Goda et al. (2013) categorized 7 types of e-learning behaviours after analysing data from a 15-week English course. As some "types" of e-learning behaviour were defined based on data from a very small number of students, they cannot be generalized as Japanese university students' e-learning habits.
Research purposes
In order to provide a general picture of Japanese students' e-learning habits, the authors started to collect relevant e-learning data eight years ago when they started integrating e-learning into the English syllabus. pc-based e-learning and mobile phone-based mobile learning were introduced to both regular courses and special English projects. Up to March 2018, 7535 students at the main campus of Shimane University had used an English e-learning program called Gyuto-e and about 850 students had used another e-learning program called alc NetAcademy2. The mlp (Mobile Learning Project), a mext grant-supported mobile English learning project, was carried out at both Shimane University and Nagoya Women's University, with an approximate total of 700 students registering from 2013 through May 2018. Students' learning data in these three projects has been carefully collected and stored on the server.
As the administrators of three different e-learning programs (see Figure 1), the authors are authorised to access all logs and other learning records on the server systems where the above three projects reside. In addition, the authors conduct surveys every year to ask students to self-report their e-learning activities and perceptions. This study addresses the following questions:
1. What digital devices do students use to complete e-learning tasks? Computers or smartphones?
2. When students have a deadline, when do they actually complete an e-learning task?
3. Do students these days have more pre-university e-learning experiences than students in the past?
Method
Shimane University is a national university in western Japan. Every year about 1000 new students are enrolled in six different faculties of this university. As a part of general education, all of the first-year students are required to take an e-learning-blended, toeic-oriented course, with the exception of medical students. It is mandatory for students to complete online assignments every week on an lms (Learning Management System). Students' learning achievements are measured using eight online quizzes and two summary tests.
The data we focus on include students' login and logout time, time length of online participation, answer accuracy, and the system functions which are frequently used.
Gyuto-e is an online English learning program developed by an it company in Hiroshima, Japan. The program itself is a user-friendly lms with a database of 40 reading passages, 800 listening questions and 740 grammar questions. All of the questions are multiple choice. After the answer is submitted, students can choose to view audio scripts, re-listen to the audio, and read the detailed explanations if it is a grammar question. On the administration side, the teacher can view a variety of students' learning data and manage the learning process.
Gyuto-e has been integrated into a mandatory course for first-year students from five different faculties at Shimane University since 2010. Eight years of student learning data, including online test results, are stored on the server.
Gyuto-e is basically a pc-based English learning program, although it is compatible with most smartphones.
alc Net Academy 2, developed by Hitachi Solutions Ltd., is another English e-learning program used at Shimane University. Students at this university use various courses such as life sciences, medicine, and toeic preparation provided by the program. Students' learning outcomes are checked with multiple choice and fill-in-the-blank questions. As with Gyuto-e, students' learning performances are recorded in the system, and teachers can view overall class data or look into an individual student's learning history.
The Mobile English Learning Project (mlp) is a mobile language learning project jointly implemented by Shimane University and Nagoya Women's University. The project began in 2011 and sends various types of English learning materials to students' mobile phones twice a week. These materials mainly consist of short news and cultural reports, personal anecdotes, toeic study materials, and general knowledge quizzes. After students read the materials, they can choose to respond by taking short comprehension quizzes or giving feedback on what they read.
In terms of the first two English language e-learning systems, we look at concentrated login time, total online learning time and accuracy of the answers.
The mlp server also records students' access time and answer accuracy. When the students are asked to register with this project, they report their expectations about, perceptions of and preferences concerning mobile language learning. In order to collect reliable data in different years, the survey questions have remained almost identical. In the registration form, we include the following questions:
1. Does your mobile phone contract have an unlimited data plan?
2. What kind of device will you use to receive mobile English learning content?
3. Have you ever used a mobile phone for learning before?
4. At what time of day do you prefer to receive English learning materials on your mobile phone?
Data results
The Mobile Learning Project (mlp) was set up as a collaborative mobile learning project between Shimane University and Nagoya Women's University. Every year, we ask students to voluntarily register with the project so that they can receive English learning materials on their mobile phones. The registration page is in fact a questionnaire to determine their e-learning habits. This graph (see Figure 2) shows that in the two universities, the iPhone is still the most common device for mobile learning, with 53-72% of the total survey participants relying on it. 19-35% of students use Android smartphones. Just under 20% of students prefer not to use mobile devices and opt for the pc to receive small chunks of learning materials.
Figure 3 clearly shows that unlike 5 years ago, Japanese students now tend to use a limited data plan for their smartphones. Unlimited plan usage was 88% in 2013 but declined to 39% in 2018. The data in Figure 5 only goes back to 2015, which was the first year students were asked if they had any experience using a mobile phone for learning prior to entering university. The number of students who have used a mobile phone for learning is gradually increasing. In 2018, 87% of first-year registrants reported that they had used mobile phones for learning before entering university.
Figure 5. Pre-university experience using a mobile phone for learning
To conclude our findings from six years of data from registrations for a mobile learning project, the following two learning habits have remained unchanged: evening and night are still the preferred mobile learning times, and the iPhone remains the most commonly used tool for mobile learning.
However, we have also noticed some changes in students' mobile learning styles: The number of students who use an unlimited data plan is decreasing, and more students enter university with prior mobile-learning experiences.
The above is the self-report data from students who participated in mlp projects in different years. The server logs show that most students only study intensively online the day before the class day, which is the deadline for the online assignment. It is natural that the largest number of accesses is on Thursday, when students are required to take an online test in the Thursday face-to-face class. The second highest number of accesses is on Wednesday, the deadline for the assignment. It is clear that when given an e-learning task and a deadline, students tend to go online right before the deadline.
Students rush before the deadline, and they relax after the deadline passes. The two days after the class are when we record the fewest accesses. Interestingly, we found that on Sundays more students go online than on Saturdays. This may be because the weekend is coming to an end, and students are starting to think about the coming week. Students were required to complete 22 units in a toeic course in 15 weeks. We collected data from classes in the same course taught by the same teacher with the same syllabus to the same level of students between 2014 and 2017. Here are the results (see Figure 9). We did not find any major changes in terms of total learning time and total logins in the past four years. This indicates that students' e-learning efforts for the same e-learning tasks have remained unchanged over the past few years.
Discussion
In this section, the authors will discuss the possible reasons for the observed e-learning habits and changes, and how online learning can be made more effective by taking these habits and changes into account.
The iPhone is used more widely in Japan than in any other country in the world. Compared to a 20.32% overall share of the world market, the iPhone has a 66.5% share in Japan, while Android has only a 32.57% share (aeradot, 2017). Our surveys of the students in the two universities confirmed this: 72% of mlp project registrants in 2017 and 68% in 2018 reported that they have iPhones.
This difference should remind e-learning teachers in Japan that when designing mobile language learning apps, the apps must be usable on iPhones. If the budget is too small to support development for two different kinds of os, one for the iPhone and one for Android phones, then iPhone app development should be given priority.
At first mobile data was expensive and Wi-Fi was not widely available, so students were concerned about the cost of their mobile data. They tried to avoid receiving any large audio or video files on their mobile phones. By 2013, 88% had decided to contract for an unlimited data plan which enabled them to use as much data as they wanted. However, the portion of students who had unlimited data plans kept decreasing every year. In 2018, only 39% of students signed such a plan with a mobile network provider. This may be due to the fact that Wi-Fi now covers most areas of the Shimane University campus and students' apartments, so students no longer need unlimited mobile data from providers, or they just need a limited amount of mobile data when they are away from Wi-Fi. What is more, Wi-Fi is usually much faster and much more stable than mobile data. With these results in mind, e-learning system designers no longer need to be cautious about using videos and pictures. They can now develop materials without concerns about the students' ability to access these materials.
As to the appropriate time to send e-learning materials, the majority of students responded that evening and night are the preferred times. From this, it can be inferred that this is the preferred time for students to study. A plausible explanation for this preference may be that, in most cases, Japanese university students have no classes at night. They have more time at their disposal and can learn online at their own pace. On the other hand, the afternoon is ranked as the least desirable time to receive e-learning materials, and hence the time of day when students are least likely to do e-learning. This is likely due to their schedule: they are occupied with classes or part-time work in the afternoon.
The increase in the number of students who had pre-university e-learning experiences is no doubt the result of guidelines set out by mext. mext has very detailed guidelines for information education in elementary and high schools (mext, 2008). Students are supposed to have basic it skills before they are enrolled in a university. In addition, 86.1% of senior high school students are reported to have smartphones, and more than 83% access the Internet through smartphones rather than by pc (Benesse Educational Research and Development Institute, 2014). Since the number of educational smartphone apps is increasing, it is quite understandable that more and more students have mobile learning experiences before they enter university.

Study habits with an integrated e-learning syllabus show certain clear tendencies. Students recognize that, as a part of the course, their online learning performance will be evaluated. They view every e-learning task as a type of homework and go online on the day of the deadline to get the tasks done. This is in spite of the fact that in the first author's blended learning class, students were repeatedly told to avoid procrastination and last-minute e-learning. Procrastination and the deadline rush have existed every year in every blended class. This phenomenon has been noted in the research of Xu (2015) and Goda et al. (2013). Actually, students may benefit from such procrastination in the post mini-test as they study right before the test day. However, in the long run, the deadline rush negatively affects learning outcomes (Milgram & Tenne, 2000). Indeed, in the authors' classes, students who always completed e-learning tasks well before the deadline had better scores on both mini-tests and final exams.
Once students complete their online homework and take the online post-test, they seem reluctant to study online again until they feel the new deadline approaching. This explains why, in both e-learning programs being used at the first author's university, the number of accesses is lowest on the day after the face-to-face class and then gradually rises as the deadline approaches. Teachers should be aware of such e-learning habits. In spite of the deadline rush tendency of most university students, we still need to set a deadline for every e-learning task. Otherwise students may not learn at all. Grit and self-discipline together with admonitions by their teachers may only work for a small group of very highly motivated students (Wang & Smith, 2013).
It has been reported that the time spent on task and the frequency of participation are key factors for successful e-learning (Morris, Finnegan & Wu, 2005). However, over the past 4 years, no changes were observed in students' online participation. In a designated e-learning course, the average student's total online learning time and total login frequency remained essentially the same from 2014 to 2017. There has not been a change in the behavior of students enrolled in recent years; they do not demonstrate any more diligence than their seniors in terms of e-learning, so teachers have no reason to increase the quantity and difficulty of assigned e-learning tasks.
conclusion
By analyzing the long-term and follow-up data collected from two pc-based e-learning projects and one mobile phone-based e-learning project, the authors have found that in Japanese universities, there are twice as many iPhone users as Android users. Evening and night are perceived to be the optimal times for mobile learning, and perhaps for pc-based e-learning as well. When given a mandatory learning task, students tend to study most intensively right before the deadline. The cost of mobile data is no longer a big concern for mobile learning users due to the expanding availability of Wi-Fi and lower mobile data prices. More students have learning experiences with mobile phones before entering university. For a syllabus-integrated e-learning program, no changes were witnessed in terms of students' e-learning diligence. The findings obtained from the two Japanese universities may be applicable to other universities in Japan. Some of the e-learning habits of Japanese students may also exist among students of other countries. It is clear that when e-learning is employed, learners' learning devices, learning styles, preferred learning times and other learning habits should be taken into consideration. Fully understanding students' e-learning habits should help e-learning teachers design appropriate and effective e-learning programs. These findings, although drawn from language e-learning courses, may provide clues for e-learning in other educational areas as well.
The above findings describing common e-learning styles and habits of Japanese university students will also be useful in developing personalized e-learning systems using design system algorithms (Klašnja-Milićević et al., 2011; Wang & Wu, 2011). This is another contribution that this research can make to the e-learning field.
Figure 2. Most-frequently used device for receiving mobile learning materials. Note: The question "On which device are you going to receive mobile learning materials?" was added to the registration in 2014.
Figure 3. Use of mobile phone unlimited data plan.
Figure 4. Most-favoured time to receive learning materials.
Figure 6. Syllabus-integrated e-learning program - Gyuto-e.
Figure 7. Gyuto-e login data of one 2017 class taught by the first author. There were 36 students in this class. They needed to complete 40 listening questions and about 30-40 grammar questions every week by 5 p.m. the day before their face-to-face class.
Figure 8. Learning record of the syllabus-integrated e-learning program - ALC Academy 2.
Figure 9. Total logins and total learning time for an online TOEIC course over four years.
"Education",
"Computer Science"
] |
Dilution of Ferromagnets via a Random Graph-based Strategy
The dynamics and behavior of ferromagnets have a great relevance even beyond the domain of statistical physics. In this work, we propose a Monte Carlo method, based on random graphs, for modeling their dilution. In particular, we focus on ferromagnets with dimension $D \ge 4$, which can be approximated by the Curie-Weiss model. Since the latter has as graphic counterpart a complete graph, a dilution can be in this case viewed as a pruning process. Hence, in order to exploit this mapping, the proposed strategy uses a modified version of the Erd\H{o}s-Renyi graph model. In doing so, we are able both to simulate a continuous dilution, and to realize diluted ferromagnets in one step. The proposed strategy is studied by means of numerical simulations, aimed to analyze main properties and equilibria of the resulting diluted ferromagnets. To conclude, we also provide a brief description of further applications of our strategy in the field of complex networks.
I. INTRODUCTION
The study of diluted ferromagnets [1-6] dates back several years, following two main, sometimes overlapping, paths: the statistical mechanics approach to lattices, and the graph theory approach to networks [7,8]. A notable result, coming from their combination, is modern network theory [9][10][11]. In particular, the latter extends classical graph theory to the analysis of networks characterized by non-trivial topologies and containing a big amount of nodes. So, the role of statistical mechanics is to offer methods and strategies for investigating the properties and the dynamics of these 'complex networks' [12,13]. Usually, investigations on ferromagnets are performed using the Ising model [14], mainly because the latter constitutes a simple and powerful tool for studying phase transitions and further applications, also beyond the domain of statistical mechanics (e.g. Data Science [15] and Machine Learning [16,17]). Despite its simplicity, the Ising model becomes, itself, a very hard problem (not yet solved) when studied in dimensions greater than 3. In those cases, the Curie-Weiss [18,19] model allows to approximate its behavior, with the advantage of also being analytically tractable (i.e. it can be exactly solved for any size of system). As a result, in some conditions, solving the Ising model might require performing numerical simulations using Monte Carlo methods [20]. For instance, the Metropolis algorithm [21] constitutes one of the early, and most adopted, strategies for simulating thermalization processes over a lattice. This latter algorithm is based on the optimization of the Hamiltonian function representing the energy of the system. Notably, the Hamiltonian of the Ising model reads

$$H(s) = -J \sum_{(i,j)} \sigma_i \sigma_j, \qquad (1)$$

where the summation is extended to all the nearest neighbors (i, j) in the lattice (realized with periodic boundary conditions, so actually becoming, in topological terms, a toroid).
As a result, the value of the Hamiltonian (1) depends on the set s, i.e. the configuration of spins σ in the lattice. Accordingly, the two ground states of the system correspond to the spin configurations ŝ+ = [+1, +1, ..., +1] and ŝ− = [−1, −1, ..., −1]. Therefore, considering a lattice with N sites, and starting with a random configuration s_x ∈ S, defined as s_x = [σ_1^x, ..., σ_N^x], the Metropolis algorithm leads the system towards a state of equilibrium which, for a temperature T = 0, corresponds to one of the two ground states. This algorithm is based on two simple steps: 1. Randomly select a site i, and compute the local ∆E associated to its spin flip; 2. IF (∆E ≤ 0): accept the flip; ELSE: accept the flip with probability e^(−∆E/kT); repeated until the equilibrium state is reached. We remind that k and T, appearing in the probability shown in step (2) of the Metropolis algorithm, refer to the Boltzmann constant and to the system temperature, respectively. In addition, the term 'local' ∆E, used in step (1), indicates that the difference in energy is computed considering only the site i and its nearest neighbors. Thus, in principle, some flips may increase the global energy of the whole system. In general, the process simulated by the Metropolis algorithm takes into account the fact that the ferromagnetic interactions J are quenched, i.e. the thermalization is fast enough to allow one to consider the interactions as constant. In the opposite case, i.e. with non-constant interactions, we have different scenarios. For instance, a spin system can become glassy by introducing anti-ferromagnetic interactions (i.e. J = −1), or can undergo a dilution process by removing interactions (i.e. setting J = 0). In this work, we focus on the dilution of ferromagnets, introducing a strategy, based on the Erdős-Renyi model [22], for modeling this process. It is worth recalling that previous investigations (e.g. [23-26]) highlighted the critical behavior of diluted ferromagnets, including for example ergodicity breaking and the vanishing of a giant component. So, beyond providing a novel method for dilution, we also give a description of some statistical properties of the resulting system, of the dynamical processes living on it, and of potential applications. To this end, the analyses are performed in two different conditions: for introducing the dilution strategy and studying some properties of the ferromagnets, the spin variables (i.e. σ) are considered as quenched, while for studying thermalization processes after a dilution, the quenched variables are the interactions J. Finally, the proposed strategy and the related analyses are performed by means of numerical simulations. Beyond describing the behavior of our model, we emphasize that the achieved results also allow us to envision potential applications in the area of complex networks. The remainder of the paper is organized as follows: Section II introduces the proposed strategy. Section III shows results of numerical simulations. Eventually, Section IV provides a description of the main findings.
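For concreteness, the two steps above can be sketched as follows for a 2-D lattice with J = 1; this is a minimal illustration, not a specific published implementation.

```python
import numpy as np

def metropolis_step(spins, kT, rng):
    """One step of the Metropolis algorithm on a 2-D Ising lattice
    with periodic boundaries and J = 1: flip a random spin if the
    local energy change dE is non-positive, otherwise accept the
    flip with probability exp(-dE / kT)."""
    n = spins.shape[0]
    i, j = int(rng.integers(n)), int(rng.integers(n))
    # sum of the four nearest neighbors (periodic boundaries)
    nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j] +
          spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
    dE = 2.0 * spins[i, j] * nb  # local energy change of the flip
    if dE <= 0 or (kT > 0 and rng.random() < np.exp(-dE / kT)):
        spins[i, j] *= -1
```

Repeating this step until the magnetization stabilizes drives the lattice towards one of the two ground states at T = 0.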
II. MODELING DILUTION ON FERROMAGNETS
Let us consider ferromagnets of dimension D ≥ 4, modeled via the Curie-Weiss (CW hereinafter) model. The latter is composed of N sites, each with a position i and a spin σ = ±1.
Here, the interactions are not limited to the nearest neighbors (like in the Ising model), but are extended to the whole system, i.e. every site interacts with all the others. Accordingly, the Hamiltonian of the CW model reads

$$H(s) = -\frac{J}{N} \sum_{i<j} \sigma_i \sigma_j,$$

where the interaction J can take the values +1 (i.e. ferromagnetic), −1 (i.e. anti-ferromagnetic), and 0 (i.e. removal). Thus, a Metropolis-like algorithm devised for flipping interactions may, in principle, generate a spin glass [27][28][29] (flipping J from +1 to −1), and perform a dilution (flipping J from +1 to 0). In addition, both processes (i.e. from +1 to −1, and to 0) can be combined, modeling the emergence of a diluted spin glass. Hence, focusing on dilution, from now on, we consider only the case J = +1 → J = 0. In doing so, starting with a random distribution of spins, a Metropolis-like algorithm (M-L hereinafter) can be defined as follows: 1. Randomly select an interaction J between two sites, and compute the local ∆E associated with its removal (i.e. setting J = 0); 2. IF (∆E ≤ 0): accept the removal; ELSE: accept it with probability e^(−∆E/kT). As in the thermalization processes, the M-L strategy depends on the Hamiltonian of the system. Furthermore, one might also consider flipping J from 0 to +1, i.e. modeling a kind of (edge) re-population. However, since the addition of interactions between inverse spins would increase the Hamiltonian, the actual realization of the flipping 0 → +1 would be quite rare.
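A minimal sketch of one M-L dilution step on the CW (complete) graph follows. Under the Hamiltonian above, removing edge (i, j) changes the energy by ∆E = +σ_iσ_j/N, so interactions between opposite spins are removed preferentially; the function name and the guard at kT = 0 are illustrative assumptions.

```python
import numpy as np

def ml_dilution_step(J, spins, kT, rng):
    """One Metropolis-like dilution step: pick a random existing
    interaction and try to flip it from J = +1 to J = 0. Removing
    edge (i, j) changes the energy by dE = +s_i*s_j / N."""
    n = len(spins)
    ii, jj = np.nonzero(np.triu(J, k=1))  # existing interactions
    if len(ii) == 0:
        return
    k = int(rng.integers(len(ii)))
    i, j = ii[k], jj[k]
    dE = spins[i] * spins[j] / n
    if dE <= 0 or (kT > 0 and rng.random() < np.exp(-dE / kT)):
        J[i, j] = J[j, i] = 0  # accept the removal
```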
B. Dilution via a Random Graphs-based Strategy
As mentioned above, modern network theory and its methods are spreading into many other scientific fields. It is therefore interesting to see whether and how network theory can be useful for facing the problem of diluting ferromagnets. Notably, our work, beyond introducing a further method for this task, also allows to prove the effectiveness of network theory in a further application. As a result, the proposed model has a double valence, i.e. the process of ferromagnet dilution can be analyzed by the tools developed in network theory, and it allows to envision new applications. For instance, as shown later, the subfield of community analysis can benefit from the proposed strategy. Given this premise, we can now proceed with a brief description of ferromagnets in the formal language of graph theory. In general, a graph G is an entity composed of two sets: N (i.e. nodes) and L (i.e. edges). As reported above, the maximum number of edges (i.e. L_M) depends on N. In addition, the edges can be provided with some properties, such as a direction, a weight, and so on, in order to represent specific characteristics of the object they refer to (e.g. a ferromagnet, or a real network such as a social network [30], a biological network [31], an immune network [32,33], a financial network [34], and many others). In the proposed model, edges have no particular properties (i.e. they are undirected and unweighted), and the graph is implemented via the E-R model. The latter is realized by defining a number of nodes N and a parameter β, which represents the probability of each edge to exist. Thus, the expected number of edges in an E-R graph is equal to E(L) = L_M · β. Notably, decreasing (increasing) β entails removing (adding) edges in the graph. The algorithm for generating an E-R graph is very simple: 1. Define the number N of nodes and the probability β; 2. For each pair of nodes, draw an edge with probability β. The two strategies can be compared as follows: i) the ER-L strategy starts with non-connected nodes and then populates the graph with new edges, while the M-L strategy starts with a complete graph and then removes edges; ii) the ER-L strategy allows to obtain more configurations than the M-L strategy, the latter being 'Hamiltonian-dependent'. In particular, once the Hamiltonian has been optimized, further actions (i.e. edge removal) have very low probability. On the other hand, the ER-L strategy, being (partially) 'Hamiltonian-independent', allows the realization of ferromagnets with a higher degree of dilution. For this reason, M-L is closer to a physical realization of a dilution than ER-L.
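The two-step E-R generation above is compact enough to sketch directly; the adjacency-matrix representation is an illustrative choice.

```python
import numpy as np

def erdos_renyi(n, beta, rng=None):
    """Generate an E-R graph G(n, beta) as a symmetric 0/1 adjacency
    matrix: each of the n(n-1)/2 possible edges exists independently
    with probability beta, so E(L) = L_M * beta."""
    rng = np.random.default_rng() if rng is None else rng
    upper = np.triu(rng.random((n, n)) < beta, k=1)
    return (upper | upper.T).astype(int)
```

With beta = 1 this reproduces the complete graph underlying the CW model, so a single instance with beta < 1 directly realizes a diluted ferromagnet in one step.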
ER-L Strategy
We are now ready to present the ER-L strategy in detail. Firstly, the ER-L uses a parameter γ ∈ [0.0, 1.0], representing a kind of control in the dilution process. Notably, γ = 0.0 entails the process is not controlled, while γ = 1.0 entails a fully controlled process.
It is worth noting that, while the dilution of a ferromagnet does not require any control, being driven towards the optimization of a Hamiltonian, when using a probabilistic model (i.e. the ER-L), whose dynamics depends only in part on the local energy, the so-called control parameter γ becomes fundamental for approaching the behavior of a physical dilution. Therefore, γ compensates for the partial energy independence of the proposed strategy. Accordingly, the edge probability β is 'corrected' to a value β′ = F_s(ω) · ω, with F_s a step function and ω a term combining β, γ, and the spins of the two interacting sites. In doing so, β′ = 0 when ω has a null or a negative value and, at the same time, the normalization condition (i.e. ω ≤ 1) is respected for any value of β, and of σ. Thus, varying the parameter β′, we can study the Hamiltonian of the resulting diluted ferromagnet and its behavior. In few words, the parameter γ makes the proposed method closer to a physical dilution, since it combines β with the contribution of the two spins involved in the interaction. For instance, from a physical point of view, an interaction between two opposite spins must be removed with a probability higher than an interaction between two equal spins. At the same time, the resulting parameter ω can take values smaller than zero, hence it cannot be directly adopted as the probability to remove an edge. As a result, we introduced a 'corrected' probability β′, which takes as input any possible value of ω and has a range limited between zero and one. The degree of freedom offered by the parameter γ allows to represent dilution processes both in physical systems, as one can do also with a more classical approach (e.g. that described before), and in other systems, such as social networks, where further properties and mechanisms can be involved in the process. In particular, in the case of social networks, dynamical processes like dilution might consider both the node similarity (e.g. the spin) and a probabilistic process mapped to the β parameter. Before illustrating the results of numerical simulations, it is important to elucidate a further aspect of our investigation. As previously reported, when studying the equilibrium configuration of a spin system, the interaction variables J are considered quenched. So, with the aim to analyze the behavior of diluted ferromagnets, the variation of J must be faster than that of σ, i.e. the latter is quenched. Now, having defined the ER-L model, we discuss how it can be used.
For instance, from a physical point of view, an interaction between two opposite spins must be removed with a probability higher than an interaction between two equal spins. At the same time, the resulting parameter ω can take values smaller than zero, hence it cannot be directly adopted as the probability to remove an edge. As a result, we introduced a 'corrected' probability β′, which takes as input any possible value of ω and has a range limited between zero and one. The degree of freedom offered by the parameter γ allows one to represent dilution processes both in physical systems, as one can also do with a more classical approach (e.g. the one described before), and in other systems, such as social networks, where further properties and mechanisms can be involved in the process. In particular, in the case of social networks, dynamical processes like dilution might consider both the node similarity (e.g. the spin) and a probabilistic process mapped to the β parameter. Before illustrating the results of numerical simulations, it is important to elucidate a further aspect of our investigation. As previously reported, when studying the equilibrium configuration of a spin system, the interaction variables J are considered quenched. So, with the aim of analyzing the behavior of diluted ferromagnets, the variation of J must be faster than that of σ, i.e. the latter is quenched. Now, having defined the ER-L model, we discuss how it can be used. Note that the higher the number of possible realizations of G with the same amount of edges, the higher its entropy (see also [35] for further details). So, in a continuous dilution, beyond considering the effect of γ, one must be able to move from a state, say G(β_t1), to a state G(β_t2) without losing information about the edges existing at t1. For instance, if β_t1 = 0.9 and β_t2 = 0.8, the ER-L must account for the removal of a density of edges equal to 0.1, preserving the remaining structure of the graph.
Therefore, simply generating a first graph with β = 0.9, and then a second one with β = 0.8, is not allowed, because the two resulting graphs are not correlated. Thus, in the considered example, the continuous dilution process entails moving through the phase space of the graph by removing each edge with a probability much smaller than β_t2, in order to also consider the effect of β_t1. To generalize, given β_t1 and β_t2, with β_t1 > β_t2, if an edge e_ij (i.e. connecting sites i and j) belonging to the graph in the state G_t1 has to be confirmed in the state G_t2, one cannot simply use β_t2 because, after the process, the edge e_ij would be present with probability P(e_ij) = β_t1 · β_t2, which is obviously smaller than β_t2. For this reason, we need to compute the factor ε such that P(e_ij) = β_t1 · ε = β_t2. In this way, at t2, each edge remains in the graph with probability ε = β_t2/β_t1. In a similar fashion, we implement the inverse process, i.e. repopulating the graph with missing edges, moving from the state G_tn to G_tn+1, now with β_tn < β_tn+1. In particular, a new edge (again denoted e_ij) must be added to G with a probability P(e_ij) = ε coming from the relation (1 − β_tn) · (1 − ε) = (1 − β_tn+1), so that ε = 1 − (1 − β_tn+1)/(1 − β_tn). Summarizing, while a diluted ferromagnet can be realized with a single instance of the ER-L model, a continuous dilution can be implemented as follows: 1. Generate a graph G(N, β_0) and define the sampling rate for the dilution, i.e. ∆β; 2. While β_t > θ:
3. Remove each edge in G with probability 1 − (β_t − ∆β)/β_t (i.e. ∆β/β_t); 4. β_t = β_t − ∆β. The parameter θ represents the final edge probability β, i.e. the probability one should use for generating, via the E-R model, a graph similar to that resulting from the dilution process. β_0 corresponds to the starting value of β for generating the initial graph, and β_t corresponds to the value of β at step t. The inverse process, i.e. the graph re-population, can be summarized as follows: 1. Generate a graph G(N, β_0) and define the sampling rate for the re-population, i.e. ∆β; 2. While β_t < ζ:
3. Add each new potential edge in G with probability 1 − (1 − β_t − ∆β)/(1 − β_t) (i.e. ∆β/(1 − β_t)); 4. β_t = β_t + ∆β. Like θ, ζ represents the final value of β one should use for generating, via the E-R model, a graph with the same statistical properties as that achieved after re-population. Moreover, we clarify that 'new potential edge' refers to the edges that can be added to the graph to make it complete again, i.e. it refers only to missing edges. Eventually, we also analyze thermalization processes (considering, after each dilution, the variables J as quenched).
To this end, the system magnetization, defined as m = (1/N) Σ_i σ_i, offers a macroscopic view of the process. Notably, we recall that the magnetization is an order parameter and allows one both to observe the emergence of a phase transition and to evaluate its nature (e.g. first order). In addition, it is worth emphasizing that quenched spins, randomly initialized with a uniform distribution, entail that the magnetization is on average always null (i.e. the system remains in a disordered phase). Further analyses devised for studying the behavior of our model are introduced in the following section.
III. RESULTS
The proposed model is studied by means of numerical simulations, considering ferromagnets composed of N = 1000 sites. In particular, we aim to obtain diluted ferromagnets with single realizations of the ER-L strategy, and to use the latter for modeling continuous dilution and re-population processes. In addition, we analyze thermalization processes on the resulting diluted ferromagnets and, eventually, we present a potential application in the field of complex networks, i.e. in the evaluation of community stability [36,37].
Dilution via the ER-L Model
We start by considering different realizations of ferromagnets via the ER-L model, on varying β and γ. Figure 2 shows the (absolute value of the) Hamiltonian H, normalized over the actual number of edges L_a, which reads H(s) = -(1/L_a) Σ_(i,j) σ_i σ_j, with s denoting a specific spin configuration and the sum running over the surviving edges. It is important to emphasize that eq. 6 is normalized in order to consider only those connections that survive the dilution process. As expected, the Hamiltonian (eq. 6) is equal to zero when there is no control in the dilution process, since interactions are removed without considering the spins of the related nodes. On the contrary, increasing γ, we observe that the Hamiltonian increases up to 1 (we remind the reader that we are considering the absolute value of the Hamiltonian, so that its actual value is -1). For γ > 0.0, the maximum of |H| can be reached spanning β within well defined ranges, which enlarge as γ increases. For instance, when γ = 0.5 the optimal H is obtained with 0 ≤ β ≤ 0.3, while when γ = 1.0 it is obtained for values of β up to 0.5. For the case γ = 1.0, Fig. 3 illustrates a graph achieved with β > 0.5 and one achieved with β = 0.5. Remarkably, for β ≤ 0.5, the resulting graph appears perfectly divided between the two communities (i.e. spins +1 separated from spins -1). Instead, for values of β slightly higher than 0.5, as represented in Fig. 3, the two communities are connected by a few edges. This observation is very important, because it is strongly related to thermalization processes (i.e. when the variables J are taken as quenched after the dilution step). Numerical simulations, shown in fig. 4, demonstrate that the ER-L strategy is able to dilute and to repopulate a graph, no matter the value of γ. In addition, implementing the two processes as a cycle, we found essentially no form of hysteresis, i.e. dilution and repopulation cover two perfectly overlapping paths in the plot of fig. 4. Only in the case γ = 0.0 did we find an observable difference between the two paths, which can still be considered negligible.
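Assuming eq. 5 is the standard magnetization m = (1/N) Σ_i σ_i and eq. 6 the edge-normalized Ising Hamiltonian (consistent with |H| reaching 1 at full order, as stated above), both observables can be computed as follows; names and the dict-based representation are ours.

```python
def magnetization(spins):
    """Order parameter (eq. 5): m = (1/N) * sum_i sigma_i.
    spins: dict mapping each node to +1 or -1."""
    return sum(spins.values()) / len(spins)

def normalized_hamiltonian(edges, spins):
    """Hamiltonian (eq. 6): H(s) = -(1/L_a) * sum over surviving edges
    of sigma_i * sigma_j, so H = -1 (|H| = 1) when every surviving edge
    joins equal spins."""
    if not edges:
        return 0.0
    return -sum(spins[i] * spins[j] for i, j in edges) / len(edges)
```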
Now, we study thermalization processes on ferromagnets diluted with different γ. To this end, ferromagnets can be diluted both by implementing single realizations of the ER-L strategy (as we did here) and by performing the continuous dilution, i.e. considering the resulting graph obtained at each step. Moreover, we remind the reader that thermalization is analyzed by studying the average magnetization (i.e. eq. 5) of the system, see Fig. 5. In all cases, it seems that the order-disorder phase transition occurring in the ferromagnet is of first order, no matter the value of γ. At the same time, the latter strongly affects the critical β_c. In particular, for γ = 0.0 we found β_c ∼ 10^-3, while for γ = 1.0 the value is smaller than 0.5 + 10^-4. When γ = 0, the transition is caused by the substantial reduction of edges, so that without interactions thermalization cannot take place. On the contrary, increasing γ, the order-disorder phase transition is caused initially by a combined effect of edge reduction and community separation, until γ = 1.0, where the disordered phase is reached through the emergence of two well separated, and ordered, communities having opposite spin (i.e. one with σ = +1 and one with σ = -1), see the inset of fig. 5. In addition, we found that with β = 0.5 + 10^-5 and γ = 1.0, the average (absolute) value of the magnetization and its variance are equal, confirming its role as the critical β_c. It is then worth further clarifying an aspect shown in fig. 5, i.e. the first order phase transition. Notably, when the dilution is strongly controlled (i.e. γ → 1), the first edges to be removed are those linking nodes with opposite spin, so that, once half of the edges are removed, only those connecting nodes with equal spins survive, leading towards a total magnetization equal to zero (i.e. summing the magnetization observed in the two separated communities, which in turn reach opposite states of full order). Instead, for poorly controlled dilutions, edges are removed without considering the values of the related spins, so that the transition occurs at lower values of β. To conclude, we note that these simulations have been performed on ferromagnets containing an equal amount of positive and negative spins.
A. Community Stability
The proposed strategy aims to perform dilution processes on ferromagnets using, as reference, a well-established random graph model (i.e. the E-R model). The latter is widely used in the modern theory of networks for studying dynamical processes and structural properties of complex networks. Now, we want to evaluate whether the modification of the E-R model that we introduced, i.e. the ER-L strategy, can be useful for extracting information from a complex network. In particular, we envision a potential application in the task of measuring the stability of a community, i.e. whether, according to the properties of its nodes, it risks disappearing after a while. Notably, in a number of models studied in social dynamics [38][39][40][41], properties and behaviors are often mapped onto binary spins [42]. So, in principle, one could use the Hamiltonian defined in eq. 6 for measuring the stability of a community [43], i.e. the higher its |H|, the higher the probability that the community survives over time.
In particular, the value of |H| reflects the degree of similarity between the nodes connected in the same community. Even if only the analysis of real datasets would allow one to confirm the validity of this hypothesis, and hence the usefulness of the proposed strategy in the area of complex networks, our assumption is based on the simple observation that groups of individuals are more likely to cluster together when they share common interests, opinions, and so on. Moreover, beyond observations of real scenarios, this mechanism is confirmed by the positive assortativity [44] that social networks show, i.e. individuals are more likely to interact with others similar to themselves. In addition, recalling that the value of eq. 6 can be computed for single communities, it can be viewed as an alternative form of assortativity at the community level, since the higher its value, the higher the fraction of connections between similar nodes.
So, since the case with binary spins has been studied above, even if in reference to dilution processes, here we focus on two main analyses. First, we study the influence of heterogeneity in a complete community, measuring how the Hamiltonian decreases while increasing the amount of nodes with different spins. Second, we study the Hamiltonian of a graph, considering the XY model [45] as reference. In doing so, we are able to represent situations where there are more than 2 opinions (e.g. [46,47]), states, or behaviors. The first investigation starts from the configuration named s+; then some spins flip, increasing the density of nodes with spin -1 (this process leads to the same results also in the inverse case, i.e. starting with s- and then flipping spins to +1). Results are illustrated in fig. 6. As expected, the minimum of |H| is reached when the number of +1 spins is equal to that of -1 spins. Finally, we analyzed the Hamiltonian of a community using the XY model. Figure 7 reports the related results, for different γ, considering both 4 different states and 360 different states. Here, the pairs of spins are evaluated according to the cosine similarity cos(θ_a − θ_b), with θ_a and θ_b representing the values of the involved spins.
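A minimal sketch of this evaluation, assuming the same edge normalization as eq. 6 with the pairwise score replaced by the cosine similarity; the function name and the angle mapping are ours. Choosing two opposite angles recovers the binary-spin case.

```python
import math

def xy_hamiltonian(edges, theta):
    """Edge-normalized Hamiltonian with XY-like spins: each connected
    pair (a, b) is scored by cos(theta[a] - theta[b]); theta maps each
    node to one of q equally spaced angles (e.g. q = 4 or q = 360)."""
    if not edges:
        return 0.0
    return -sum(math.cos(theta[a] - theta[b]) for a, b in edges) / len(edges)
```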
We observe two main differences with respect to the classical binary spins. In particular, the decrease of the Hamiltonian is smoother in the XY model than in the Ising model, where it appears less regular. Furthermore, the maximum value of |H| is always smaller than 1.0. Accordingly, we note that communities are more stable (or robust to spin flipping) when there are more than 2 states characterizing the related nodes. On the other hand, many possible states do not allow a community to reach perfect stability (i.e. |H| = 1), exposing it to a higher risk of disappearing.
IV. DISCUSSION AND CONCLUSION
This work introduces a strategy, named ER-L, for modeling the dilution of ferromagnets using the framework of modern network theory. In particular, we adopt as reference the E-R graph model, since the latter, under opportune conditions, constitutes the graphical representation of the Curie-Weiss model. The proposed method is partially Hamiltonian-independent, i.e. while a Metropolis-like strategy can dilute a ferromagnet according to energy-based rules, the ER-L strategy depends on a probabilistic (non-physical) parameter β and, only in part, on the local energy via a parameter γ, which represents a kind of control over the dilution. Notably, diluting a ferromagnet can be thought of as pruning a graph G, moving the latter in a phase space composed of all its possible realizations. The amount of edges (i.e. interactions) depends on the parameter β of the ER-L model, so our strategy moves G along the β axis. In doing so, G undergoes a kind of phase transition (see also [48][49][50][51]), where different structures can be obtained, from sparse nodes to a complete graph. In addition, the parameter γ ensures that the motion along the β axis, in the phase space, corresponds to that followed by a ferromagnet during a spontaneous dilution. In particular, just as during thermalization a system tends to naturally reach an equilibrium state that minimizes its energy, spontaneous dilutions should lead the system towards a ground state. Results indicate that ER-L is able to perform this task for different values of γ, depending on the considered β. In addition, we also analyzed the closed path (i.e. the cycle) from a complete graph to single nodes, and then back to a complete graph by repopulating the diluted graph with new edges. It is worth highlighting that ER-L allows one to dilute a graph also after its Hamiltonian has been minimized, while a Metropolis-like strategy, being 'Hamiltonian-dependent', would not be able to do so. Therefore, not all the structures obtained via ER-L have a physical meaning. However, during a continuous dilution, we can identify those without a physical meaning by computing the difference of the Hamiltonian between the two structures. The related analyses have been performed considering the spin variables σ as quenched. We then studied also the opposite case, i.e. after a dilution the interactions J become quenched, and the spins can flip towards an equilibrium state (see also [52][53][54]). In order to study this process, and to relate it to the parameter β, we analyzed the average magnetization achieved at equilibrium, which provides an indication of the phase transitions occurring in the system (see fig. 5). Having analyzed the outcomes of the ER-L strategy, we performed a further analysis to evaluate the opportunity of applying it to other tasks, in particular the measurement of the stability of communities in complex networks. First, we analyzed the variation of the Hamiltonian as an ordered system is turned into a disordered one. Then, we studied the Hamiltonian considering as reference the XY model, i.e. admitting spins with more than 2 values. This preliminary investigation suggests that ER-L may, in principle, be useful for evaluating the risk that a community will dissolve after a while, according to the degree of heterogeneity of its individuals (e.g. in terms of opinions, interests, and so on). In addition, we found that communities whose nodes have more possible states (e.g. opinions) never reach perfect stability (i.e.
|H| < 1), but they can be more robust than those with binary spins to the emergence of interactions between different individuals. Obviously, we are not taking into account all the 'social' processes that may occur in real systems; e.g. once two different individuals interact, one might imitate the other, relaxing the system. Moreover, considering the two strategies described here, i.e.
ER-L and M-L, we deem it important to mention that, in principle, they might also constitute the basis for developing learning algorithms [55]. Notably, almost all simulations have been carried out considering an equal distribution of positive and negative spins; however, different combinations (i.e. patterns) might be used. Therefore, in our view, the optimization of the Hamiltonian during the dilution, even if referred to only one pattern, can actually be interpreted as a form of learning in a neural network [56]. On the other hand, further investigations are required to evaluate whether the proposed model may allow the graph to learn and store more than one pattern. Finally, we remark that, in order to assess the actual usefulness of ER-L for evaluating community stability, further investigations based on real datasets are definitely mandatory.
The authors declare that there is no conflict of interest regarding the publication of this paper.
ACKNOWLEDGMENTS
MAJ would like to acknowledge support by the H2020-645141 WiMUST project, and to thank the mobility funds of the Faculty of Psychology and Educational Sciences of Ghent University. The authors wish to thank Adriano Barra for his priceless suggestions.
"Computer Science"
] |
METHOD FOR SOLVING THE COMPLEX OPTIMIZATION PROBLEM OF FORMING A MACHINERY COLUMN AND SELECTING THE ROUTE OF ITS MOVEMENT ON A NON-STATIONARY ROAD NETWORK
Context. The effective solution of a number of applied transportation problems depends, as a rule, on the solution of two problems: the correct formation of the composition of a column of machinery and the successful choice of the route of its movement. Each of these is an optimization problem, and methods for solving them are still being developed. Theoretical studies of each individual problem and their practical applications indicate an interdependence between them that has not yet been fully studied. Practical applications necessitate the development of a suitable scientific and methodological apparatus. Objective. The purpose of this work is the development of a method for solving a complex optimization problem of forming a column of vehicles and choosing the route of its movement on a non-stationary road network. Method. A mathematical model for solving the optimization problem of the complex formation of the composition of a column of machinery and the choice of its route of motion is proposed. A heterogeneous set was used to describe the array from which the vehicles were selected, and a graph was used to describe the road network. The criterion of optimality of the complex problem is the minimization of the time spent on movement. The peculiarity of the model is that it takes into account the possibility of dynamic changes in the time weights of the edges of the graph during the movement of a column of machinery along the chosen route. Based on this model, a method is proposed that provides a comprehensive choice of the composition of the column of equipment and optimal routes of its movement on a non-stationary road network. Results. The article proposes an algorithm that solves the optimization problem of the complex formation of the composition of a machinery column and the choice of its route of motion under time-varying weights of the edges that describe the network of roads. The features of the application of the proposed algorithm are given. Using the developed software, the choice of equipment from an existing inhomogeneous array and the choice of a route on a graph with non-stationary time weights of edges were investigated. An example shows the imperfection of decisions regarding the complex formation of the column composition and the choice of its optimal route of travel on a non-stationary network of roads obtained using classical methods. Conclusions. Failing to take into account the impact of a possible change in traffic conditions, evidenced by a change in the time weights of the edges of the graph describing the road network, on the composition of the column of machinery can lead to suboptimal solutions when using classical methods of forming the composition of the column and finding the shortest route in the graph. The method proposed in this study can be used to obtain the optimal composition of the column and the route, taking into account changes in road conditions during the movement of the column. The obtained results extend the possibilities of the theory of discrete optimization and the theory of graphs.
NOMENCLATURE
x_i - symbol of a specific vehicle, i = 1, ..., n;
U_1 - conditional shorthand for the initial conditions concerning the available vehicles from which the composition of a machinery column for transporting personnel and freight can be formed;
U_2 - conditional shorthand for the initial conditions regarding the tactical and technical characteristics of the vehicles in the column;
U_3 - conditional shorthand for the initial conditions relating to the studied graph of the road network;
K_1 - conditional shorthand for the criterion on the level of technical readiness of the equipment in the column;
K_2 - conditional shorthand for the criterion on the number of vehicles in the column;
K_3 - conditional shorthand for the criterion on the number of brands of vehicles in the column;
K_4 - conditional shorthand for the criterion on the duration of the march;
O_1 - conditional shorthand for the restrictions on the level of readiness of each vehicle at the stage of formation of the column;
O_2 - conditional shorthand for the restrictions on the carrying capacity of the vehicles;
O_3 - conditional shorthand for the restrictions on the volume of the vehicle bodies;
O_4 - conditional shorthand for the restrictions on the passenger capacity of the vehicles;
O_5 - conditional shorthand for the restrictions on the consumption of diesel fuel by vehicles in the column;
O_6 - conditional shorthand for the restrictions on the consumption of fuel of the first brand by vehicles in the column;
O_7 - conditional shorthand for the restrictions on the consumption of fuel of the second brand by vehicles in the column;
O_8 - conditional shorthand for the restrictions on the consumption of fuel of the third brand by vehicles in the column;
O_9 - conditional shorthand for the restrictions on the range reserve by motor resource;
O_10 - conditional shorthand for the restrictions on the duration of the march;
O_11 - conditional shorthand for the restrictions on the level of readiness of the vehicles upon arrival at the destination;
O_12 - conditional shorthand for the restrictions on the number of vehicles in the column;
O_13 - conditional shorthand for the restrictions on the number of brands of vehicles in the column;
INTRODUCTION
Today, issues of transport optimization are extremely important in different fields of human activity, in particular in solving various logistics tasks. The successful implementation of many transportations depends significantly on the timely arrival of the convoy of vehicles at a specific destination. For the effective transportation of various cargoes by land, various modern vehicles with many different characteristics are used. Transportation requires prior planning and, where possible, optimization of the composition of the machinery column, taking into account a wide range of factors. The next step requires solving the problem of determining the optimal route of the machinery column. A quite extensive network of roads leads to a significant number of possible routes connecting the place of departure with the destination. This multiplicity of variants is observed even for small distances that need to be overcome. The dynamics of the traffic situation may essentially influence the choice of the optimal route. Due to the influence of predictable and stochastic factors, the velocity of the column on separate sections of the route varies significantly. Incorrect consideration of a possible change in road conditions can lead to the incorrect formation of the structure of the column of vehicles and choice of route. This can jeopardize the timely arrival of the column at the destination; consequently, certain tasks may fail. Therefore, the task of organizing the march (the complex solution of its component problems) is relevant, and the presence of variability, the large number of factors that should be considered when dealing with it, their complex interactions, and their influence on the result cause considerable computational complexity and necessitate powerful computational tools and the development of appropriate information technology to solve the problem.
The object of the research is the process of forming the composition of the machinery column and choosing the route of its movement.
The subject of the research is the scientific and methodological apparatus of complex optimization of the composition of the column and the choice of the route of its movement.
The purpose of this work is to develop a method for solving a complex optimization problem of forming a machinery column and choosing the route of its movement on a non-stationary road network. The available vehicles and their characteristics are specified by the initial conditions U_1 and U_2. Also specified is a network of roads that connects the departure point (point A) to the destination (point B). The mathematical model of the road network is a marked graph G, whose edge weights are the lengths of the respective sections of the road, U_3.
PROBLEM STATEMENT
It is necessary to arrange transportation from point A to point B so that: - the duration of the march is minimal (K_4); - the readiness coefficient of each vehicle is not less than the permitted level (O_1); - the total carrying capacity of the vehicles from the composition of the column allows carrying the goods (O_2); - the total volume of the bodywork of the vehicles from the composition of the column allows carrying the goods (O_3); - the total passenger capacity allows carrying the personnel (O_4); - the total fuel consumption of the vehicles from the convoy does not exceed the amount of fuel available for the march, by fuel type (O_5 - O_8); - the motor resource reserve is not less than the distance of transportation (O_9); - the vehicles arrive at point B with readiness not less than specified (O_11); - the number of vehicles in the column is not greater than the specified value (O_12); - the number of vehicle brands in the column is not greater than the specified value (O_13). However, it should be taken into consideration that, during the movement of the column, the motion time along individual edges can be variable. This is determined by the influence on the time of movement along a single edge of different conditions, such as climatic (rain, ice, fog, etc.), man-made (blockage of the roadway, its damage due to flooding of the terrain, etc.), changes in the period of day (day, night), etc.
It should also be taken into consideration that the calculation of the motion time along individual edges can be carried out: 1) at the moments when the column is at a certain vertex of the graph, with the calculation made at those moments. This is the case where the decision on the further route is made at the branching points of the roads, taking into account the situation due to the condition of individual sections, which changes dynamically and data on which appear periodically; 2) at the moments when the column is at a certain vertex of the graph, with the initial data on the speed of movement upon entering the vertex known in advance for these moments. This is the case where a route decision can be made at the beginning of the movement, taking into account the known situation regarding the state of the roads, which will change dynamically but whose data can be taken into account in advance.
REVIEW OF THE LITERATURE
The issue of forming the composition of a column of vehicles for the efficient movement of goods has been given attention in a number of works, in particular [1][2][3][4].
Thus, in [2] the method of tactical calculations for determining the number of vehicles for the transportation of goods took into account the characteristics of the cargo, the load capacity and speed of the vehicles, the range of movement, the times for loading, unloading, and refueling, the rest of drivers between trips (if applicable), as well as the timing of the movement of goods.
The work [3] addresses the prediction of the effectiveness of the march of a military formation based on the reliability of weapons and military equipment, as well as the impact on march effectiveness of the number of repair units, the technical state of the equipment in terms of reliability, the level of efficiency of repair bodies in carrying out repairs, and the labor costs of restoring weapons and military equipment.
In [4], a variant of the cargo transportation model for finding the optimal route of transporting goods from one sender to several consumers in the transport network is presented.
However, the analyzed works [2][3][4] ignored requirements for forming the optimal composition of the vehicle column, such as the level of readiness, the motor resource reserve, the number of brands and samples, the availability of fuel for refueling, etc. These requirements were reflected in the author's work [1].
An approach to choosing the route based on 'edge labels' is given in work [5]. Its use makes it possible to speed up the search for the shortest path by a factor of 500 compared to the standard Dijkstra algorithm over a large graph. In work [6], an algorithm for selecting optimal routes in a multimodal public transport network is presented. Based on the results of this study, the approach to routing through transit hubs has been adapted to plan public transportation. In the scientific work [7], the method of contraction hierarchies was used to find the shortest path. In the study [8], based on the application of the SHARC algorithm, the possibilities of finding the shortest paths for arbitrary means of transportation in a continental-scale transport network were presented. The problem of multimodal route planning was investigated in the scientific paper [9]. In the research [10], a model for estimating the traffic delays of vehicles was presented, taking into account arbitrary loads in the process of movement. The study [11] considers route planning for military ground vehicles on the battlefield. In [12], an algorithm was developed for solving the problem of finding the shortest paths in urban public transit routes, taking into account the duration of transfers, using the branch and bound method. Works [13][14] explore the application of geoinformation technologies to logistics tasks in military affairs based on modern ArcGIS information systems [15][16][17].
In the author's work [18], the problem of choosing the optimal route of movement of the machinery column of a rapid-response border commandant unit was solved, taking into consideration the peculiarities associated with the pre-processing and reliability assurance of the initial data based on the use of spline functions [19][20][21]; mathematical models of the studied problem were constructed for three cases (discrete-stochastic, discrete-deterministic and continuous-indefinite), which depend on the peculiarities of the realization of the column's motion; and algorithms for choosing the optimal route of movement of the column were proposed for each possible case.
However, despite the considerable attention that has been paid to the tasks of forming the optimal composition of a column of machinery and choosing the route of its movement, the task of organizing a march that organically combines both of these problems has not yet been completely investigated. This is explained by the imperfection of the approaches to solving this problem.
In view of this, the purpose of this work is to develop a method for solving a complex optimization problem of forming the composition of a column of machinery and choosing the route of its movement on a non-stationary road network.
MATERIALS AND METHODS
At the physical level, the formulated task of organizing the march consists in the complex solution of two interrelated problems: problem 1 - selection of the appropriate composition of the machinery column; problem 2 - choosing the appropriate route of its movement.
It should be noted that each of problems 1 and 2 has been solved separately from the other; the corresponding solutions are given in [1,18].
Problem 1 is solved as a single-criterion optimization problem comprising initial data, a criterion of the form min(·), and a system of restrictions, together forming expressions (1)-(4). In problems (1)-(4), the single criterion was obtained by a functional combination of three separate criteria. Expressions (5)-(6) take into account the variability of the edges of the road network graph, as well as the format of such change: how it occurs, at what moments, and at which stage the dynamic matrices of the edges become known.
The content of the initial conditions and the criterion of problem 2 are described in detail in work [18].
The result of the solution of problem 2 is the path of motion of the column (v_1; v_2; ...; v_s), the sequence of vertices through which the path of movement must pass; herewith, v_s = B. In these notations, the mathematical model of the problem under study can be represented as an optimization problem of the form (7)-(14), comprising the initial data (7), the criterion (8), and the system of restrictions (9)-(14). To solve the problem under study (7)-(14), we consider that the speed of movement of the column along the road network is the key factor that determines the time required to overcome the distance between the start of the route and the destination. Consider a possible approach to its definition. When traveling on the road, this speed is affected by a number of circumstances.
The first factor comprises the technical speed limitations of the vehicles that are part of the column. The speed of movement of the column may not exceed the minimum of the maximum permissible speeds of the respective vehicles. The approach to determining this velocity can be taken from [18]. Considering this factor in the model does not cause much difficulty; let us denote it v_min^kol. The second factor comprises the speed limits of the road sections. For each road section of every edge of the graph G, speed limits are determined by the features of the section (its shape, type of pavement, angle of inclination, etc.). Of course, these restrictions are indicated by the appropriate restrictive signs. Limitations of this type can be set as a function of the position along each edge of the road graph. In the general case, this limitation may additionally depend on time (the nature of the road surface may change, road works may be performed, additional restrictive signs may be established), so its consideration is the most difficult. To simplify, for now we do not consider such a time dependence, and we denote this restriction v_ij(x), where i, j are the vertices of the graph joined by an edge, and x is the coordinate that determines the position on that edge (varying from 0 to L_ij, the length of the corresponding edge). The third factor is the global impact of weather or the time of day. Adverse weather conditions (heavy rain, snow, fog) or darkness can significantly limit the speed of the column. In particular, such an effect can be significant in combination with low temperatures (ice may occur). Of course, weather influences can also have a local character; then, in addition to the time, the location of the column should additionally be taken into account. However, for the sake of simplicity, we will consider only global weather effects that depend only on time. Let us denote this factor v_pog(t).
Given these factors, the speed of the column can be represented by the expression v(x, t) = min{v_min^kol, v_ij(x), v_pog(t)} (15). Given the speed (15), one can calculate the passage time of each of the edges that make up the route: t_ij = ∫_0^L_ij dx / v(x, t(x)) (16). The complexity of (16) is due to the necessity of taking into account the dependence t(x) in the integrand. This dependence can be taken into account when evaluating (16) numerically as a constant component T_i (the time of the beginning of the passage of the edge (i, j)), to which the accumulated fraction of the integral (16) is added (17). The residual travel time on the route is the sum of the time intervals (16) spent moving along all the edges of the route (18). The approach taken to determine the speed and position of the column at any given time allows us to proceed directly to the description of the method for solving the complex optimization problem of forming the composition of the column and choosing the route of its movement under conditions of a dynamically changing situation and taking into account speed restrictions (a numerical sketch of (15)-(17) is given below).
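In the sketch below, eqs. (15)-(17) are reconstructed from the description above; the identifiers v_ij and v_pog follow the notation of the text, while edge_travel_time, dx and the example values are ours.

```python
def edge_travel_time(L_ij, v_kol_min, v_ij, v_pog, T_i, dx=0.1):
    """Numerical evaluation of eq. (16), t_ij = integral over [0, L_ij]
    of dx / v(x, t(x)), with the speed of eq. (15),
    v(x, t) = min(v_kol_min, v_ij(x), v_pog(t)).
    Following eq. (17), the current time is the start time T_i of the
    edge plus the accumulated fraction of the integral."""
    t, x = 0.0, 0.0
    while x < L_ij:
        step = min(dx, L_ij - x)              # km
        v = min(v_kol_min, v_ij(x), v_pog(T_i + t))  # km/h
        t += step / v                          # hours
        x += step
    return t

# Example: a uniform 90 km/h section limit and an assumed 20 km/h
# weather restriction acting from 10 until 12 o'clock (the scenario
# considered in the experiments below).
t_edge = edge_travel_time(
    L_ij=60.0, v_kol_min=70.0,
    v_ij=lambda x: 90.0,
    v_pog=lambda t: 20.0 if 10.0 <= t < 12.0 else 130.0,
    T_i=9.5)
```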
The algorithm of the method is proposed as follows.
In the first stage, a set of tuples describing all possible routes that connect the starting point to the destination in the road graph should be formed (Fig. 1). To reduce the cardinality of this set and, accordingly, the computational complexity of the problem, only variants without repeated passage of nodes are considered. The construction of the set of routes without repeated passage of nodes is carried out on the basis of the modified Dijkstra algorithm described in [18] (a simple enumeration sketch is given below). Subsequently, for each of these tuples (routes), a search is made over all possible variants for constructing a column.
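The authors build this set with a modified Dijkstra algorithm [18]; purely as an illustration of the enumeration itself, a plain depth-first sketch (not the authors' algorithm, and practical only for small graphs) is:

```python
def all_simple_paths(graph, start, goal, _path=None):
    """Enumerate all routes from start to goal that never repeat a node.
    graph: dict mapping a node to the iterable of its neighbours."""
    path = [start] if _path is None else _path
    if start == goal:
        yield list(path)
        return
    for nxt in graph.get(start, ()):
        if nxt not in path:
            path.append(nxt)
            yield from all_simple_paths(graph, nxt, goal, path)
            path.pop()

# Example on a toy graph: list(all_simple_paths({'A': ['B'],
# 'B': ['A', 'E'], 'E': ['B', 'H'], 'H': []}, 'A', 'H'))
```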
In the column search, the system of restrictions (9)-(12) is checked. Those columns that do not satisfy this system of restrictions are not considered.
It should be noted that the fulfillment of individual restrictions is affected by the length of the route under consideration. This is one of the factors that link the two parts of the complex optimization problem under study.
For each variant of the construction of the column, the resulting maximum speed of its motion is determined by formula (15), as the speed of the vehicle with the worst speed capabilities included in the column.
Taking into account this speed and other factors, formulas (16)-(18) determine the time required for the investigated composition of the column to overcome the route. According to the criterion (8) of minimum time, the best combination of route and column composition is chosen, which is the desired solution.
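The overall search can be summarized by the following skeleton; all argument names are placeholders for the problem-specific data and checks described above, not an implementation of the authors' software.

```python
def best_march(routes, candidate_columns, satisfies, column_speed, route_time):
    """Combined search: for every route and every feasible column, compute
    the travel time and keep the (route, column) pair minimizing it, per
    criterion (8).
    satisfies(column, route) -> bool checks restrictions (9)-(12);
    column_speed(column) gives the minimum of the vehicles' maximum speeds
    (formula (15)); route_time(route, speed) evaluates formulas (16)-(18)."""
    best = None
    for route in routes:
        for column in candidate_columns:
            if not satisfies(column, route):
                continue
            t = route_time(route, column_speed(column))
            if best is None or t < best[0]:
                best = (t, route, column)
    return best  # (time, route, column) or None if nothing is feasible
```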
EXPERIMENTS
For the algorithmic-software implementation of the method described above, appropriate software was developed. The XML format (Fig. 2-3) was used for the convenience of describing the input data (vehicle fleet and road network). The experimental studies considered the network of roads described by the graph shown in Fig. 4. This graph shows the weights of the edges, which describe the lengths of the corresponding road fragments. Fig. 4 also shows the location from which the column departs (node A) and the destination (node H). Using the modified Dijkstra algorithm to find the shortest route allows one to determine that the shortest route is ABEH, with a total length of 251.7 km.
However, when traveling on a network of roads at different speeds, the shortest route will not always provide the minimum travel time.
RESULTS
We will evaluate the results of determining the optimal composition of the column in combination with optimizing the route of its movement from point A to point H. In the first stage, we set demanding requirements for the transportation of people and cargo. To meet them, a large list of vehicles with significant restrictions on the maximum speed of movement had to be used in the formation of the column (Fig. 5). With such a convoy structure, restrictions on the maximum vehicle speed (50 km/h) make it impossible to realize the high-speed potential of the roads themselves. Therefore, the route obtained is the same as the shortest route obtained using the Dijkstra algorithm. Weakening the requirements for the carriage of people and the volume of cargo reduces the composition of the column and improves its speed from 50 km/h to 60 km/h (Fig. 6).
Improving the speed of the column leads to an improvement in the optimization score: the travel time decreased from 5.44 h to 4.87 h. However, with this increase in the average speed along the route (from 46 km/h to 52 km/h), the route itself has not changed.
With further weakening of the transportation requirements, as a result of the optimization the column retains only the single vehicle that has the best speed characteristics and meets the other requirements for the composition of the column (Fig. 7).
With this composition of the column and the improvement of its speed characteristics (up to 70 km/h), there is a further reduction of the travel time to 4.43 hours. Most interesting, however, is the change of route, which now differs from the shortest one. This change is explained by the fact that, when moving from node E to node H, although the edge {EH} is shorter than the section {EFH}, the speed of movement on this edge is lower. At the low speeds possible for the previous columns, this advantage of the road section {EFH} could not be realized (Fig. 5-6). However, when the speed is increased to 70 km/h, this variant of the route improves the optimized travel time. Despite the increase in the route distance from 251.7 km to 272.7 km, by increasing the average speed to 61.6 km/h the resulting time was reduced to the optimal value of 4.43 h.
Let us now study the impact on the result of solving the complex optimization problem of a reduction in the speed of the column due to worsening weather conditions. To this end, the program imposed a speed limit of 20 km/h from 10 until 12 o'clock. The parameter T0 is used to set the start time of the column's departure. Consider the impact of this factor on the formation of columns for the transportation of 3 people and a cargo volume of 3 m³. With such restrictions, as shown above, the column is formed of 1 car. We solve the optimization problem while changing the parameter T0 from 4 to 12 hours (Fig. 8).
According to the calculation results, for start times of 4 and 5 hours the results are identical and coincide with the 0-hour variant (Fig. 7). This is explained by the fact that it takes 4.43 hours to move the column, and therefore the influence of the weather conditions is excluded (the column arrives at the destination before 10 o'clock). However, for a start time of 6 hours, the weather begins to have a partial effect, and the speed restriction on the last sections of the route increases the travel time by almost one hour. Most interesting is the effect of this factor for departure times of 7 and 8 o'clock. The speed limitation at the final stage of the movement eliminates the benefits of the detour via {EFH}, where a good speed is otherwise ensured; therefore, the shortest route is optimal. For departures from 9 to 11 o'clock, the speed limitation applies on the initial sections of the route, and the route {ABEFH} becomes optimal again. However, different amounts of time, from 5.09 to 5.8 hours, are spent to overcome it. The last optimization variant in Figure 8 corresponds to a 12-hour departure time, when the influence of the weather conditions has ended. Therefore, the result obtained is the same as for the T0 values of 0, 4 and 5 hours. Consider also the column variant with moderate constraints, which lead to the formation of a column of two cars. Fig. 9 shows the optimization results for T0 values from 4 to 12 hours. Similar to the previous variant, the first two cases show no influence of the speed limits (the column manages to complete the route before the negative influence of the weather conditions begins).
Interestingly, the optimal route changes for T0 = 6 and 9 hours. All variants affected by the negative conditions show greater travel times.
DISCUSSION
The method proposed in this study for solving the complex optimization problem of forming a column of machinery and choosing the route of its movement on a non-stationary road network allows one to take into account the initial conditions and restrictions on the level of readiness of each vehicle at the stage of forming the column, the load capacity, body volume and passenger capacity of the vehicles, the consumption of different types of fuel by the vehicles of the column, the motor resource reserve, the duration of the march, the level of readiness of the vehicles at the moment of arrival at the destination, and the number of vehicles and their brands in the column, which brings the studied problem as close as possible to real conditions. However, although the proposed method solves the problem under study, the questions of choosing a sampling step in the study of the mathematical model and of quantitatively assessing the reliability of the obtained results remain relevant. In order to predict the impact of changes in road conditions on the movement of the column, it is advisable to consider the use of artificial intelligence methods, in particular neural networks.
CONCLUSIONS
Thus, the task of organizing the march, which provides a complex solution to the problems of forming the composition of a column of machinery and choosing the route of its movement, has been formulated in this work, and its mathematical model has been constructed.
Taking into account the peculiarities of the model, which are determined by the influence of stochastic factors, a method for the complex choice of the composition of the column of machinery and the optimal route of its movement is proposed. Algorithmic software that takes into account a considerable part of the features of the model has also been developed.
The study presents an example of using the proposed method on a fragment of the road network and shows a significant relationship between the two partial problems, which is amplified as the influence of the stochastic factor increases.
The application of the proposed method can allow avoiding unreasonable decisions when organizing movements.
The scientific novelty of the obtained results lies in the formalization of the problem of the complex search for the composition of a column of vehicles and the route of its movement, taking into account the dynamics of the change of the road situation, in the structure of the corresponding mathematical model, and in the method of its solution. The results of computational experiments confirmed the interdependence of the partial problems of the studied complex problem. The proposed scientific and methodological apparatus expands the scientific toolkit of the theory of discrete optimization and the theory of graphs.
The practical significance of the obtained results lies in increasing the efficiency of logistic transportation by optimizing the composition of the column of machinery and the routes of its movement. The software-algorithmic implementation of the proposed method makes it possible to expand the functionality of the respective software and hardware complexes.
The prospects for further research are to investigate the effectiveness of the proposed mathematical model. It is also promising to explore the possibility of using artificial intelligence methods to predict the development of road conditions.
"Computer Science"
] |
The Translocator Protein (TSPO) in Mitochondrial Bioenergetics and Immune Processes
The translocator protein (TSPO) is an outer mitochondrial membrane protein that is widely used as a biomarker of neuroinflammation, being markedly upregulated in activated microglia in a range of brain pathologies. Despite its extensive use as a target in molecular imaging studies, the exact cellular functions of this protein remain in question. The long-held view that TSPO plays a fundamental role in the translocation of cholesterol through the mitochondrial membranes, and thus, steroidogenesis, has been disputed by several groups with the advent of TSPO knockout mouse models. Instead, much evidence is emerging that TSPO plays a fundamental role in cellular bioenergetics and associated mitochondrial functions, also part of a greater role in the innate immune processes of microglia. In this review, we examine the more direct experimental literature surrounding the immunomodulatory effects of TSPO. We also review studies which highlight a more central role for TSPO in mitochondrial processes, from energy metabolism, to the propagation of inflammatory responses through reactive oxygen species (ROS) modulation. In this way, we highlight a paradigm shift in approaches to TSPO functioning.
Introduction
The innate immune response is an emergent phenomenon arising from complex molecular and cellular signalling pathways. In the central nervous system (CNS), the innate immune response is orchestrated by microglial cells, capable of adopting differential functional states in response to stressors. The switch from homeostatic microglia to activated, potentially pro-inflammatory, microglia has become an important indicator of pathology in the CNS [1,2]. This switch is also known as M1/M2 polarization, where classically activated pro-inflammatory M1 microglia and alternatively activated M2 microglia lie on opposite ends of a continuum of functional states [3,4]. The pro-inflammatory M1 polarisation of resident microglia is often referred to as "neuroinflammation" to distinguish it from inflammatory tissue responses with recruitment of peripheral immunocytes. Neuroinflammation has been described across a wide spectrum of diseases, including psychiatric disorders, neurodegeneration and autoimmune diseases [5][6][7][8][9].
The strong metabolic demand of activated, proliferating microglia is reflected in the functional changes of mitochondria [10,11], and an intrinsic link exists between mitochondrial energetics and the mounting of an inflammatory response [12,13]. The pro-inflammatory M1 phenotype of microglia is underpinned by a switch from oxidative phosphorylation to glycolysis and involves shifts in mitochondrial fission and fusion [13,14]. Mitochondrial proteins are also involved in the signalling transduction pathways that activate inflammatory pathways to release pro-inflammatory factors, and also mediate apoptosis. Facilitating these immune responses is mitochondrial reactive oxygen species (ROS) release and oxidative stress signalling [15][16][17][18][19], as well as the NLRP3 inflammasome complex activation [20]. It is, therefore, important to understand the subcellular drivers of microglial reactivity, which will ultimately aid in the detection and targeting of key mediators of these immune processes.
One of the most widely studied biomarkers of microglial activation is the translocator protein 18 kDa (TSPO). TSPO is a transmembrane protein located primarily on the outer mitochondrial membrane [21,22], and was originally discovered as a peripheral binding site to diazepam [23], hence being long referred to as the peripheral benzodiazepine receptor (PBR). The marked upregulation of TSPO in activated microglia in a broad spectrum of neurobiological diseases has made TSPO a prominent target for positron emission tomography (PET) imaging studies [24]. Concomitantly, a large field of enquiry into TSPO binding agents for diagnostics and therapeutics has also emerged. Indeed, even the earliest description of TSPO upregulation in activated microglia was observed through the increased binding of prototypical TSPO ligands 1-(2-chlorophenyl)-N-methyl-N-(1-methylpropyl)-3-isoquinolinecarboxamide (PK11195) and 4'-Chlorodiazepam (Ro5-4864) in injury models [25,26]. Whilst these ligands are still widely used in studies probing TSPO functioning, newer classes of TSPO ligands have also emerged which will be mentioned throughout this review. Due to the prolific research into TSPO binding agents, several reviews have covered the many TSPO molecular imaging agents and their use in imaging studies in a spectrum of disorders [27][28][29], including neurodegenerative disorders [8,[30][31][32][33], psychiatric disorders [34][35][36] and tumours [37].
Despite being widely utilised in imaging studies, the exact functions of TSPO are not well understood. It was long posited that TSPO was primarily involved in the translocation of cholesterol, and thus, steroidogenesis, supported by the strong abundance of TSPO expression in steroidogenic organs [38,39]. This led to the change in nomenclature from "peripheral benzodiazepine receptor" to "translocator protein" [40]. However, sparked by the creation of global TSPO knockout mouse models from independent groups [41][42][43][44], in which mice displayed normal pregnenolone levels, an overall shift in thinking about the functioning of TSPO emerged (several reviews exist which detail the history of TSPO research more comprehensively [45][46][47]). The field of TSPO has since expanded into several different pathways, with many of these lines of enquiry converging on the primary role of TSPO in immunomodulation, cellular bioenergetics, and associated mitochondrial processes. These lines of evidence have been achieved mainly through genetic deletion studies and studies using both older and novel TSPO binding agents. Along with this, several studies are now highlighting the potential neuroprotective properties of TSPO ligands in a broad range of disease or injury models.
In this review, we describe the more recent and growing body of literature surrounding the evolving knowledge of the functions of TSPO. Whilst TSPO has been studied in a broad spectrum of disease states in PET imaging studies, in this review we integrate more direct studies examining the role of TSPO in inflammatory responses, and its more central role in the modulation of mitochondrial energetic processes.
TSPO in Mitochondrial Bioenergetics
After a hypothesis first proposed by Anholt (1986), the earliest studies linking the function of TSPO to mitochondrial energy respiration used compounds targeting TSPO on isolated mitochondria from organs strongly expressing TSPO, such as the kidney, adrenals and liver [48,49]. This was followed by studies in a neuroblastoma cell line, where the high affinity ligands PK11195, and to a lesser extent, Ro5-4864, in co-culture resulted in dose-dependent increases in oxygen consumption compared to non-treated controls [50]. It was later found in a glioma cell line that these TSPO ligands can also increase mitochondrial numbers and mitochondrial division, further linking TSPO to mitochondrial dynamics [51]. However, from the earliest studies, the known strong presence of TSPO expression in steroidogenic organs and the ability of cholesterol to bind with high affinity to TSPO led to several years of study into the specific role of TSPO in cholesterol transport and steroidogenesis [39,[52][53][54]. As this function continues to produce conflicting evidence, a role for TSPO in steroid synthesis cannot be ruled out, though further data in a range of experimental models is required.
In 2014, independent research groups created global and conditional TSPO knockout models which were found to be viable with no observable abnormalities and, importantly, no effect on steroidogenesis [41,43,55]. It was found, however, that microglia from TSPO knockout animals had lower ATP synthesis and oxygen consumption rates than their wildtype counterparts, suggesting that TSPO deficiency leads to alterations in microglial metabolic activity [41]. Extending these findings, this group also demonstrated that TSPO overexpression in a human T-cell line increases mitochondrial ATP synthesis compared to wildtype and empty-plasmid control cells, measured with the luciferin-luciferase bioluminescence assay in permeabilised Jurkat cells with intact mitochondria and cell structure. TSPO-overexpressing cells also exhibited increased proliferation rates compared to wildtype and empty-plasmid control cells [56], suggesting that a key role for TSPO lies within cellular bioenergetics.
In more recent years, this path of inquiry has been pursued by multiple groups, using both rodent and human microglia. In the human microglial C20 cell line engineered for TSPO knockout, pregnenolone levels were unchanged in wildtype and knockout cells, including when exposed to TSPO ligands [57]. In agreement with Banati et al. (2014), basal and maximal respiration, as well as ATP-related oxygen consumption, were lower in TSPO knockout cells compared to wildtype cells. This was accompanied by reduced mitochondrial membrane potential and calcium retention [57]. In another recent study, using the rodent microglial cell line BV2, TSPO knockdown cells had altered mitochondrial membrane potential compared to scrambled controls. Furthermore, scrambled-control BV2 cells treated with Ro5-4864 and PK11195 had increased basal respiration and ATP-related respiration [58], demonstrating the ability of TSPO-specific ligands to alter mitochondrial processes of microglia. A very recent and comprehensive study of the role of TSPO in several key mitochondrial functions has demonstrated similar results. TSPO knockout glioma cells were shown to have more mitochondrial fragmentation, increased levels of mitochondrial fission proteins such as FIS1, and decreased complex I activity compared to wildtype cells. TSPO knockout cells also had lower mitochondrial membrane potential, decreased global ATP production, reduced basal and maximal mitochondrial respiratory capacity and higher ROS levels compared to wildtype cells. These findings were also replicated in TSPO knockdown patient-derived stem-like GBM1B cells, demonstrating analogous effects in human cells [59]. Mechanistically, TSPO has been linked to the F1Fo-ATP synthase, the mitochondrial enzyme complex responsible for synthesising cellular ATP. PK11195, Ro5-4864 and protoporphyrin IX (PPIX) have been found to modulate the phosphorylation of subunit c of the F1Fo-ATP synthase in isolated rat brain mitochondria [60]. PK11195 has also been found to inhibit mitophagy in Bcl-2 knockdown HeLa cells by inhibiting activity of the F1Fo-ATP synthase, in a similar way to oligomycin [61].
TSPO and Redox Mechanisms
An important by-product of the mitochondrial electron transport chain is the production of free radicals. Significant evidence has emerged linking TSPO to the production and modulation of ROS, important to the functioning of pro-inflammatory microglia [19]. In BV2 microglial cells, PK11195, Ro5-4864 and PPIX exposure in culture increased initial ROS production compared to non-exposed controls, and pre-treatment with an antioxidant blocked this effect [62]. In response to a stressor, TSPO-transfected Jurkat cells were more resistant to UV-induced apoptosis, due to delayed membrane depolarisation, reduced caspase-3 activity and reduced superoxide generation compared to wildtype cells [63]. More recently, in a model of Alzheimer's disease using amyloid precursor protein (APP)-overexpressing neuroblastoma cells, newly synthesised imidazoquinazolinone TSPO ligands were able to reduce oxidative injury by decreasing ROS generation in response to H2O2 stimulation. These ligands could also stabilise mitochondrial respiration under stress conditions, and ultimately reduce levels of amyloid beta formation [64]. These novel ligands have also been found to increase levels of ATP production and stabilise mitochondrial membrane potential unrelated to any effects on steroidogenesis [65]. TSPO ligands have also demonstrated analogous effects in other immune cell types, as PK11195 was shown to reduce basal ROS generation in mouse primary peritoneal macrophages [66], and Ro5-4864 has been shown to preserve mitochondrial membrane potential and reduce ROS production after glucose deprivation stress in astrocytes, compared to controls [67].
Interestingly, TSPO and its involvement in ROS modulation have also been linked to mitophagy [17,68], a process which has been implicated in the innate immune response [69]. Through an interaction with voltage-dependent anion channel 1 (VDAC1), TSPO was able to inhibit mitophagy by increasing ROS production in mouse embryonic fibroblasts and canine mammary gland epithelial cells. TSPO was able to modulate levels of LC3B-II activity and the ubiquitination of key proteins involved in mitophagy. TSPO knockdown cells also displayed greater ATP production in response to challenge, and greater glutathione levels, compared to overexpressing cells and controls [68]. The production of ROS is important in the functioning of microglial responses; Guilarte et al. (2016) recently hypothesised that an interaction between TSPO and NOX2, producing ROS, may activate Nrf2, which is involved in redox homeostasis and antioxidant responses. This complex may work to regulate the release of ROS in microglia, particularly in states of chronic neuroinflammation where prolonged activation causes oxidative damage to surrounding tissue [70].
TSPO functioning has also been linked to mitochondrial cell death processes and apoptosis through oxidative stress [71]. PK11195 pre-treatment in U118MG glioblastoma cells was able to prevent cell death induced by cobalt(II) chloride by inhibiting apoptosis, mitochondrial membrane potential collapse and cardiolipin oxidation. These effects were also observed after TSPO knockdown in U118MG cells [72]. TSPO has also been linked to nitric oxide (NO) in the induction of cell death. In U118MG cells, application of PK11195 with the NO donor sodium nitroprusside (SNP) attenuated cell death, counteracted decreases in metabolic activity, and reduced mitochondrial membrane potential collapse compared to SNP or vehicle controls. This effect was similar in TSPO knockdown cells, adding further evidence to the involvement of TSPO in cell death processes [73]. A similar phenomenon has also been demonstrated in a Drosophila model. TSPO knockout Drosophila had reduced gamma radiation-induced apoptosis, as well as increased lifespan compared to wildtype flies. TSPO knockout flies also had decreased oxidative phosphorylation enzymatic activity, increased oxidative stress, and reduced mitochondrial respiration after 1 week [74]. This was also confirmed later in a Drosophila model of alcohol dependence, where flies with conditional TSPO knockout in neurons had increased ROS production in the brain, although this was sex-dependent [75].
In Vivo Evidence for TSPO in Mitochondrial Processes
In concordance with these data, studies using in vivo models have also demonstrated a key role for TSPO in mitochondrial processes, as well as in neuroprotection. In a model of post-ischemia reperfusion in rats, Ro5-4864 pre-treatment reduced ROS levels, reduced the activity of NADPH oxidase, and increased the activity of complexes I and III of the electron transport chain in heart tissue, thereby improving functional recovery [76]. This ligand has also been shown to have mitochondrially-targeted protective properties in a rat model of myocardial ischemia-reperfusion, where Ro5-4864 pre-treatment in hypercholesterolemic rats inhibited cholesterol accumulation and restored oxidative phosphorylation compared to vehicle controls [77]. In an in vivo model of cortical trauma injury in rats, administration of Ro5-4864 after cortical contusion reduced the severity of brain mitochondrial damage as seen under transmission electron microscopy (TEM). This was accompanied by a reduced lactate/pyruvate ratio, indicating less metabolic damage in these animals compared to vehicle-treated animals, ultimately leading to improved recovery [78]. Neuroprotective effects of newer TSPO ligands have also been reported. In a rat model of cortical infarction, 2-(2-chlorophenyl)quinazolin-4-yl dimethylcarbamate (2-Cl-MGV) administration after cerebral artery occlusion was found to prevent the collapse of mitochondrial membrane potential and reduce cytochrome c levels in the thalamus of injured rats. 2-Cl-MGV also rescued cognitive impairments and neuronal loss after injury compared to vehicle controls [79]. In a mouse model of spinal cord injury, administration of the TSPO ligand ZBD-2 following injury downregulated TSPO expression, reduced levels of inducible nitric oxide synthase (iNOS) and malondialdehyde (MDA), and increased levels of superoxide dismutase (SOD) in serum, whilst also reducing neuronal loss after injury [80], hence exhibiting antioxidative properties. It is therefore apparent that there is much emerging literature surrounding the role of TSPO in mitochondrial processes, including energy metabolism and ROS generation (Table 1). This is also important in the context of immune responses, as these mechanisms are intrinsically involved in the pro-inflammatory responses of microglia (Figure 1).
Figure 1. Overview of the translocator protein (TSPO) in the inflammatory responses of microglia and its interaction with mitochondrial processes. Under stress conditions, TSPO is upregulated in activated, pro-inflammatory (M1) microglia. Located on the outer mitochondrial membrane, TSPO interacts with reactive oxygen species (ROS), a key part of the microglial inflammatory response. TSPO also interacts with inflammatory transcriptional pathways including MAPK and the NLRP3 inflammasome, resulting in the release of cytokines. These processes can be modulated by TSPO ligands, and by genetic deletion of TSPO, indicating a key role for TSPO in these processes.
Microglial TSPO Expression and Modulation by TSPO Ligands
As a prominent molecular imaging biomarker of activated microglia in a range of pathologies, TSPO is generally presumed to be an indicator of pro-inflammatory, aberrant microglial activation, hence "neuroinflammation". However, the precise role of TSPO in the inflammatory processes of microglia during active disease states remains unclear. Furthermore, this is complicated by the known complexity of the functional states of microglia, which most likely span a spectrum of M1/M2 responses depending on the stimulus [81]. The question of where on this spectrum TSPO is most prominently upregulated has recently been addressed by several studies. In a human microglial cell line, it was recently demonstrated that TSPO is directly involved in the modulation of pro- and anti-inflammatory phenotypes. TSPO mRNA expression was found to be greatest in cells after exposure to pro-inflammatory stimulation with IL-1β and IFN-γ. Differential secretion of pro- and anti-inflammatory cytokines could also be modulated by TSPO ligands and genetic knockdown. This was underpinned by mitochondrial ROS modulation and NF-κB pathway expression, thereby underscoring the involvement of TSPO in the mitochondrial processes of inflammatory responses [82]. Similarly, lipopolysaccharide (LPS) stimulation of mouse primary microglia resulted in increased TSPO expression in M1 microglia, though not in IL-4 treated M2 microglia. Thus, it was concluded that TSPO is specifically a biomarker of pro-inflammatory processes [83], a finding which has also been validated more recently in a similar experimental approach [84].
These conclusions have also been supported by studies using TSPO ligands. In primary human microglia, pre-treatment with PK11195 inhibited LPS activation and reduced COX2 and TNF-α production, as well as calcium influx, compared with vehicle-treated controls [85]. Importantly, the morphology of the microglia, which also reflects functional state, was similar to controls, thereby demonstrating the modulatory effect of PK11195 on microglial function and morphology [85]. In rat primary microglia, PK11195 has also been shown to inhibit increases in NO after LPS stimulation [86]. In primary mouse microglia, Etifoxine and PK11195 pre-treatment before toll-like receptor ligand activation were able to reduce microglial activation and production of TNF-α, CCL2 and IL-6 [87]. Vinpocetine, a specific TSPO ligand, was also shown to inhibit the activation and proliferation of LPS-stimulated or oxygen-glucose-deprived BV2 cells compared to untreated controls. Furthermore, Vinpocetine was found to reduce the release of NO, IL-1β, IL-6 and TNF-α, whilst simultaneously suppressing upregulation of NF-κB and AP-1, which are important signal transduction pathways in the inflammatory response [88].
Not all studies have demonstrated the same immunomodulatory effect of TSPO ligands in pro-inflammatory suppression. PK11195 and Ro5-4864 treatment in rat primary microglia was found to increase the rate of phagocytosis, cell proliferation, ROS generation (also acting through NADPH oxidase), and IL-1β secretion, thereby increasing the pro-inflammatory capacity of microglia [91]. In TSPO knockdown BV2 cells challenged with LPS, increased ROS production, TNF-α expression and microglial proliferation rate have also been reported [92]. The same effect was also reported using the RAW 264.7 macrophage cell line, where TSPO knockdown cells exposed to Hemin activation showed increased TNF-α and IL-6 release compared to scrambled controls [93]. In human cells, differential effects have also been reported. In human monocyte-derived macrophages induced to an M1 state, TSPO mRNA expression was reduced compared to controls. There was also no difference in TSPO expression in M2-stimulated macrophages compared to control [94]. In human primary monocyte-derived macrophages stimulated with LPS/IFN-γ, decreased TSPO gene expression and radioligand binding have also been reported in another study [95]. These results may reflect the known complexity of the functional and activation states of microglial cells, which may also be species-dependent and depend on the concentration of ligand applied.
Molecular Pathways of TSPO Immunomodulation
At the molecular level, the TSPO activation pathway interacts with downstream inflammatory effectors. Ro5-4864 application in THP-1 monocytes and bone marrow-derived macrophages has been shown to inhibit NLRP3 inflammasome complex activation and assembly after ATP stimulation, accompanied by reduced caspase-1 activation and IL-1β and IL-18 secretion [20]. TSPO transcription and expression are driven by c-Jun and STAT3 through the MAPK and PKCε signal transduction pathways, all of which are key to the induction of an inflammatory response [96][97][98]. ROS is also intrinsically linked to this pathway, activating PKCε-dependent pathways and c-Jun to control TSPO transcription [99]. PK11195 has also been shown to modulate genes and transcription factors in U118MG glioblastoma cells, particularly those related to cell viability, cell death, proliferation, and tumorigenesis, therefore showing that TSPO is involved in mitochondria-nuclear signalling and can modulate gene expression [22]. In the human macrophage cell line THP-1, exposure to Midazolam, a TSPO binding agent, suppressed LPS-induced inflammatory responses, including IL-6 expression, NO production, and NF-κB and MAPK activation. This effect was specific for TSPO, as TSPO knockout cells did not show it [100]. Hence, TSPO interacts with inflammation pathways at the transcriptional level, manifesting in the immunomodulatory effects described in this review.
In Vivo Immunomodulation of TSPO Ligands
Using in vivo models, TSPO ligands have been shown to modulate inflammation after injury or in certain disease states, thereby demonstrating neuroprotective properties (Table 2). In the earliest study examining this effect, Zavala et al. [101] demonstrated that administration of PK11195 and Ro5-4864 in mice impaired the release of IL-1β, TNF-α and IL-6 by peritoneal and spleen macrophages, accompanied by impaired oxidative respiratory burst of these macrophages. Since this finding, there have been several other studies using different disease and injury models demonstrating analogous effects. In an excitotoxic rat model of Huntington's disease, PK11195 administration reduced the number of activated microglia after injury, and also reduced IL-1β, IL-6, TNF-α and iNOS mRNA expression after injury. This was also accompanied by a reduction in 4-HNE (lipid peroxidation) and 8-OHdG (oxidative DNA damage), alongside increased neuronal cell survival in the striatum near the site of injection, thereby conferring an anti-inflammatory and neuroprotective effect [102].
A similar phenomenon was also demonstrated with three novel pyrazolopyrimidine TSPO ligands: DPA-713, DPA-714 and propargyl-DPA, in a quinolinic acid model of excitotoxic neurodegeneration. All three pyrazolopyrimidine ligands decreased microglial activation and promoted neuronal survival in the injected striatum compared to vehicle controls [103].
To emphasise the neuroprotective potential of a variety of TSPO ligands, several other studies have demonstrated improved disease progression, symptoms and functional recovery in various disease and injury models following ligand administration [104,105]. Etifoxine was found to promote greater regeneration of myelinated axons compared to vehicle controls 2 weeks after sciatic nerve crush injury. This was accompanied by increased neuronal survival, reduced activation of macrophages, and reduced pro-inflammatory cytokine secretion. Ultimately, this manifested in the recovery of sensory and motor functions [106]. Etifoxine administration has also been found to improve behavioural test scores and rescue neurons from degeneration in a rat model of traumatic brain injury, accompanied by reduced microglial activation and reduced cortical concentrations of IL-1α, IL-1β, IL-6, CCL2 and TNF-α [107]. In an Alzheimer's disease mouse model, application of Ro5-4864 attenuated the accumulation of amyloid beta plaques in the hippocampus and decreased microglial activation. Importantly, this was accompanied by improved behaviour and cognition [108]. In a study using a retinal degeneration mouse model, application of XBD173, another new-generation TSPO ligand, prevented microglial reactivity in the retina after injury and reduced both IL-6 and CCL2 gene expression. These findings were accompanied by prevention of retinal degeneration after injury, hence drastically inhibiting disease progression [109]. XBD173 has also been shown to decrease pro-inflammatory cytokines and delay multiple sclerosis progression and symptoms in an experimental autoimmune encephalomyelitis (EAE) mouse model [110], and has also demonstrated efficacy in attenuating microglial activation and neuronal loss in an MPTP Parkinson's disease model [111].
Future Directions
This review has presented the growing body of literature investigating the role of TSPO in inflammatory responses and in mitochondrial processes, where function has largely been explored through pharmacological studies. Whilst the diazepam derivative Ro5-4864 is regarded as selective for TSPO, careful reading of earlier studies reveals the possibility of some residual binding to GABA-ergic sites [112,113]. Hence, a degree of caution is necessary in the interpretation of in vitro observations where high concentrations of Ro5-4864 have been used. Whilst no other obvious binding site is known for PK11195, the compound might insert itself into lipid bilayers [114,115], an interaction that may speculatively contribute to the recent observation that the residence time of PK11195, rather than merely its binding affinity to TSPO, may determine its physiological activity [116]. Nonetheless, the continued study of novel TSPO binding agents will be important for the development of therapeutic compounds for a range of disorders. The therapeutic potential of TSPO ligands has already been demonstrated in combination with photodynamic [117] and sonodynamic therapy [118], where the ability of TSPO to modulate ROS and apoptosis mechanisms can be exploited. Whilst much has been learned through pharmacological targeting of TSPO, deletion studies of TSPO are comparatively lacking. Further studies using TSPO-deficient cells and animal models under stress conditions, for example after irradiation, are needed to clarify the exact role of TSPO in oxidative stress responses and microglial responses. Further studies using various experimental approaches are also required to address the discrepancies in data on ROS and energy metabolism, where pharmacological evidence and genetic deletion studies give sometimes inconsistent results. This may be owing to the differences between TSPO knockdown versus knockout, and the range of ligand concentrations used across studies. Studies using TSPO knockout material are also necessary in order to determine the specific effects or off-target effects of novel compounds.
Study of TSPO has centred largely on pro-inflammatory responses specifically. However, some studies have demonstrated no difference, or even decreases, in TSPO expression after exposure to a pro-inflammatory stimulus in vitro [94,95]. It is also interesting that in a TSPO-deficient mouse model, the ability to mount a local microglial response to nerve cell injury was not compromised [41], and in a more recent TSPO knockout model, TSPO deficiency did not affect microglial number or morphology in models of retinal degeneration [119]. The complex spectrum from pro- to anti-inflammatory processes in microglia has not yet been fully characterised, and the extent to which TSPO expression levels may indicate the balance between pro- and anti-inflammatory tissue responses remains open. The terms "microglial activation" and "neuroinflammation" are perhaps too broad and require comprehensive assessment in response to specific stimuli, which may involve systems biology approaches going forward [120].
In vivo, PET studies have also reported decreases in TSPO binding in certain disorders. For example, in a mouse schizophrenia model, TSPO radioligand binding was decreased in the prefrontal cortex, which did not match the concomitant increases in inflammatory cytokine levels [34]. In recent studies, evidence has confirmed that there is constitutive TSPO expression in brain cell types other than activated microglia. These include, for example, neural stem cells, tanycytes, and most abundantly, vascular endothelial cells across the brain [26,[121][122][123]. This has brought into question the approaches used for TSPO signal quantification in molecular imaging studies, and also raises questions as to the interpretation of imaging studies where a downregulation of TSPO is reported. Notably, when reductions below normal levels are observed, this may reflect a reduction in constitutively expressed TSPO, which appears to be mainly located in the vasculature. Therefore, investigating other known TSPO-expressing cell types over longer periods of time will be useful in order to ascertain whether a cumulative change in the regional expression level of TSPO is due to a change in the number of cells expressing inducible TSPO, or whether a downregulation of TSPO indeed takes place in individual cells in which TSPO increases had previously been induced. Furthermore, investigating other known TSPO-expressing cell types will also be useful for future investigations into the functions of TSPO. This may be particularly pertinent given that the cell types with the highest TSPO expression appear to be mitotically active, proliferative cells, further pointing to a fundamental role for TSPO in energy metabolism.
"Medicine",
"Biology"
] |
CRISPR/Cas9-Mediated Zebrafish Knock-in as a Novel Strategy to Study Midbrain-Hindbrain Boundary Development
The midbrain-hindbrain boundary (MHB) acts as an organizer and controls the fate of neighboring cells to develop into either mesencephalic (midbrain) or metencephalic (hindbrain) cells by secreting signaling molecules like Wnt1 and Fgf8. The zebrafish is an excellent vertebrate model for studying MHB development due to the ease of gene manipulation and the possibility of following cellular dynamics and morphogenetic processes using live imaging. Currently, only very few reporter and/or Cre-driver lines are available to study gene expression at the MHB, hampering the understanding of MHB development, and traditional transgenic technologies using promoter/enhancer fragments or bacterial artificial chromosome (BAC)-mediated transgenesis often do not faithfully recapitulate endogenous expression patterns. In contrast, CRISPR/Cas9-mediated genome editing technology now provides a great opportunity to efficiently knock-in or knock-out genes. We have generated four CRISPR/Cas9-based knock-in fluorescent reporter lines for two crucial genes involved in MHB development, namely otx2 and pax2a. The coding sequences of the reporters were knocked-in upstream of the corresponding ATG and are, thus, under the control of the endogenous promoter/enhancer elements. Interestingly, this strategy does not disturb endogenous gene expression. Using the fast maturing fluorescent protein reporter, Venus, enabled us to follow MHB development using cell tracking and live imaging. In addition, we show that these reporter lines label various neuronal and glial cell types in the adult zebrafish brain, making them highly suitable for investigating embryonic and adult midbrain, hindbrain, and MHB development.
INTRODUCTION
The boundary between the midbrain (mesencephalon, mes) and the hindbrain (metencephalon, met) establishes a clear demarcation between the midbrain and the anterior hindbrain. This midbrain-hindbrain boundary (MHB), also known as the isthmic organizer or midbrain-hindbrain organizer, acts as a local signaling center (Wurst and Bally-Cuif, 2001; Raible and Brand, 2004; Rhinn et al., 2006; Dworkin and Jane, 2013). The MHB forms at the interface of two transcription factor domains in the neural plate epithelium, namely the anterior Otx domain and the posterior Gbx domain (Rhinn et al., 2003). This interface is established by mutual transcriptional repression and initiates formation of the prospective MHB. Successively, morphogens such as Wnt and Fgf, and transcription factors like Engrailed1/2 and Pax2/5/8, induce the formation of the MHB, and their subsequent interplay is critical for maintenance of the MHB (Rhinn and Brand, 2001; Wurst and Bally-Cuif, 2001; Raible and Brand, 2004; Rhinn et al., 2006; Dworkin and Jane, 2013). The above-mentioned factors (Otx, Gbx, Wnt, Fgf8, Pax, and Eng) form the core of the MHB signaling machinery, and a disruption of any of these factors interferes with the formation and function of the MHB.
Several model organisms, including chicken, mouse, and zebrafish, have been extensively used to understand the complex network of genes and their interactions during MHB development (Martinez-Barbera et al., 2001;Raible and Brand, 2004;Zervas et al., 2004;Rhinn et al., 2006;Sunmonu et al., 2011;Tossell et al., 2011). The zebrafish model has been instrumental in the identification of several genes that are essential for MHB development due to the availability of numerous mutants obtained from large-scale mutagenesis screens (Brand et al., 1996;Schier et al., 1996).
The zebrafish is an ideal model for studying vertebrate brain development, and especially patterning of the central nervous system from neural plate stages onwards, for the following reasons. (A) Fertilization is external and a single female can produce several hundred eggs (>200) in a single spawning; (B) the transparent nature of the embryos allows real-time imaging of the developing embryo; (C) embryonic development is fast: a fertilized egg develops into a larva with most organs fully formed within 3 days; and (D) genes can be readily manipulated, and both loss- and gain-of-function experiments are easy to perform. Besides neurobiology studies in the embryo and larva, the adult zebrafish brain has widespread proliferative neural stem cells (neurogenic zones) distributed along the rostro-caudal axis (Adolf et al., 2006; Grandel et al., 2006). The zebrafish has also become a valuable model system for understanding neural stem cell heterogeneity, adult neurogenesis under homeostasis and injury, and regeneration (reviewed in Kizil et al., 2012; Grandel and Brand, 2013; Alunni and Bally-Cuif, 2016). Therefore, the ability to combine genetic manipulations with the application of advanced microscopic techniques, both in developing and adult animals, makes the zebrafish a unique model to study vertebrate neurogenesis.
Effective study of adult neurogenesis or regeneration often requires labeling of specific cell types such as stem cells or neurons. The midbrain tectum, with its stratified cellular organization, is not only structurally similar to the mammalian cortex but is also involved in visual input processing and coordination of goal-directed movements. Thus, it collectively acts as a major visual processing center in the brain. To successfully understand visual processing mechanisms, it is imperative to label individual cells or groups of cells in a neuronal network (Robles et al., 2011), and the majority of previous studies relied on neuroanatomical studies in other teleosts for such insight (Meek and Schellart, 1978; Meek, 1981). Nevertheless, a few studies were successful in labeling cell populations of the midbrain tectum using Tol2-based transgenesis or gene-trap-based random integration methods (Scott and Baier, 2009; Muto et al., 2013). Until recently, genetic manipulations in the zebrafish mainly entailed constitutive mutants generated using random mutagenesis screens or transposon-mediated random integration of transgenic constructs. Although cell-specific promoters of interest would still be active in the adult fish, random integration often results in ectopic expression and/or gene silencing, thereby limiting the ability to consistently label specific cell types, especially in adult tissues. Thus, new genetic tools that reliably label cells by inserting reporter constructs at their native promoter sites would be of great interest.
Recent advances in genome editing that utilize sequence-specific DNA nucleases like zinc finger nucleases, TALENs, and more importantly CRISPR/Cas9, have opened hitherto non-existent opportunities to knock out or knock in genes at precise locations in the zebrafish genome (Jao et al., 2013; Auer et al., 2014; Hoshijima et al., 2016). Double-stranded breaks (DSB) created by the above-mentioned systems trigger cell repair mechanisms like the non-homologous end-joining (NHEJ) pathway that result in site-specific insertion/deletion (indel) mutations in the corresponding target sites. Recently, Auer et al. (2014) showed that reporter constructs could be efficiently integrated at the target site (TS) by providing a bait plasmid containing the TS, and have used this homology-independent knock-in system to convert GFP reporter lines into Gal4 driver lines or to directly knock in Gal4 transgenes. Adopting this strategy, we have generated knock-in reporters by targeting them to a 5′ sequence upstream of the ATG for two of the essential genes involved in MHB development, namely otx2 and pax2a. These transgenic lines act as read-outs for gene promoter activity and provide new tools to observe MHB development in real time.
Zebrafish Strains and Maintenance
Zebrafish (Danio rerio) embryos were obtained by natural spawning. Both embryos and adults were raised and maintained at 28.5 °C with a 14-h light and 10-h dark cycle (Westerfield, 2000; Brand et al., 2002). Embryos were staged in hours post fertilization (hpf) as described previously (Kimmel et al., 1995). The wild-type strain AB was used to generate the reporter knock-in lines, and transgenic fish lines were maintained as outcrosses. Neither the larvae nor the adult fish from the generated reporter lines showed any physiological or behavioral abnormalities.
Molecular Cloning
Genomic DNA from the wild-type strain AB was used to amplify bait sequences by PCR (Phusion polymerase, Thermo Fisher) using the primers listed in Supplementary Table S2. To generate donor plasmids, baits were cloned into pCS2+ or TOPO PCR II vectors (Invitrogen) containing the coding sequence for the Venus fluorescent protein or turboRFP (promoterless tRFP plasmid, Evrogen). The CMV promoter was later removed from the pCS2+ vector. All constructs were verified by sequencing.
Single-Guide RNA (sgRNA) Design and Off Targets
The sgRNA sequences targeting the NGG-PAM motif were identified about 500 base pairs upstream of the ATG start codon of each gene of interest. The CHOPCHOP web tool was used for selecting target sites (Montague et al., 2014). The sgRNA designed for otx2 had the sequence GGAACccggCTAATTGTCTCAGG, while that for pax2a was GGGGggatctGGGAAGGAGGGGG; the 3′-terminal NGG motifs are the PAM sequences. Loss of the restriction site (HpaII and XhoII, respectively; marked in lowercase) was used to estimate the efficiency of the sgRNA. No off-targets were identified for the chosen TS using CHOPCHOP, i.e., there were no genomic targets with 2 bp mismatches (Cong et al., 2013). Further, the transgenic reporter lines have been outcrossed for more than 6 generations to date, thereby diluting out potential off-target mutations.

FIGURE 1 | Scheme of the knock-in strategy. A target site (TS), located ∼500 base pairs upstream of the ATG in the non-coding region of the gene of interest, was chosen. The bait plasmid was constructed by cloning 1 kb of the upstream sequence, including the target site, into a plasmid with the desired fluorescent reporter and polyA (PA) sequence. The bait plasmid, the sgRNA against the target site, and Cas9 mRNA were injected at the 1-cell stage. The Cas9 protein creates double-stranded breaks (DSB) at both TS, i.e., the genomic locus and the bait plasmid, and the linearized plasmid bait is integrated by homology-independent repair. Forward integration of the bait plasmid results in expression of the fluorescent reporter that matches the expression pattern of the gene of interest. Primer pairs (A+B, C+D) can be used to verify the 5′ and 3′ junctions of the knock-in, respectively.
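Target-site selection of this kind amounts to scanning a fixed upstream window for 20-nt protospacers followed by an NGG PAM on either strand. The sketch below illustrates that scan in Python; it is only a toy illustration of the underlying search (the study itself used the CHOPCHOP web tool), and the example sequence is invented for demonstration.

```python
# Toy illustration of SpCas9 target-site scanning (not the CHOPCHOP tool):
# enumerate 20-nt protospacers followed by an NGG PAM on both strands
# of an upstream window. The example sequence below is invented.
import re

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGTacgt", "TGCAtgca"))[::-1]

def find_ngg_sites(seq: str):
    """Yield (offset, strand, protospacer, PAM) for candidate SpCas9 sites."""
    for strand, s in (("+", seq), ("-", revcomp(seq))):
        # lookahead so that overlapping candidate sites are not missed
        for m in re.finditer(r"(?=([ACGTacgt]{20})([ACGTacgt]GG))", s):
            yield m.start(), strand, m.group(1), m.group(2)

upstream = "GGAACCCGGCTAATTGTCTCAGGATCTGGGAAGGAGGGGGTACGTACGT"  # toy window
for offset, strand, proto, pam in find_ngg_sites(upstream):
    print(f"{strand} strand, offset {offset:2d}: {proto} | PAM {pam}")
```

In practice, candidate sites found this way would still be ranked by off-target counts and GC content, which is what dedicated tools such as CHOPCHOP automate.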
sgRNA, Cas9 Generation, and Injection into Zebrafish Embryos
The sgRNA and Cas9 mRNA were prepared as previously described (Jao et al., 2013), and the plasmids for generating the mRNA were a gift from the Chen and Wente labs (sourced from Addgene). All injections were carried out in wild-type AB strain embryos at the 1-cell stage. Each embryo was injected with 1 nl of solution containing 35 ng/µl of sgRNA, 150 ng/µl of Cas9 mRNA, and 25 ng/µl of circular donor plasmid. The sgRNA and bait plasmid concentrations were optimized such that at least 50% of the injected embryos survived and showed normal development at 24 hpf. The injected embryos were monitored for the next 5 days and about 100 embryos were raised to adulthood for each transgenic line.
Identification of Founders and Genotyping
Injected embryos were raised and outcrossed with the wild-type strains WIK or TL. Founders were identified by screening F1 embryos for the presence of a fluorescence signal at 24 hpf. Screening was stopped once two founders had been identified for each transgenic line. Genomic DNA was isolated from individual F1 embryos, and PCR amplification and subsequent sequencing were used to verify integration at the 5′ and 3′ junctions. F1 embryos with a fluorescence signal were raised and outcrossed to different wild-type strains (AB, WIK, or TL) between generations to reduce the general effects of inbreeding.
RNA Extraction and Quantitative Real-Time PCR (qRT-PCR)
For both otx2 and pax2a, the Venus knock-in fish were crossed with the tRFP knock-in line and the double-positive (Venus+ and tRFP+) embryos were sorted at 48 hpf. Respective wild-type control embryos (double-negative) were also collected at 48 hpf. The embryos were pooled (n = 15), lysed in Extrazol (BLIRT S.A.), RNA extracted, and treated with DNase. One-step real-time reverse transcription PCR (Takara) was performed on biological (n = 3) and technical (n = 3) replicates to quantify the expression of otx2 and pax2a in the double-positive transgenic embryos (Venus+ and tRFP+) compared with wild-type embryos. Beta-actin was used as a housekeeping gene to normalize the expression values. Fold changes were calculated using the 2^−ΔΔCT method (Livak and Schmittgen, 2001), and the two-tailed, unpaired t-test was used to calculate statistical significance at a p-value of 0.05 (GraphPad Prism, ver. 5.0).
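For readers unfamiliar with the calculation, the sketch below shows a minimal implementation of the 2^−ΔΔCT fold-change method with an unpaired two-tailed t-test; the Ct values are invented purely for illustration and are not data from this study.

```python
# Minimal sketch of the 2^-ddCt fold-change calculation (Livak and
# Schmittgen, 2001) with an unpaired two-tailed t-test. All Ct values
# below are invented for illustration only.
import numpy as np
from scipy import stats

def fold_change(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Per-replicate 2^-ddCt values, calibrated against a control group."""
    dct = np.asarray(ct_target) - np.asarray(ct_ref)  # normalize to reference gene
    dct_cal = np.mean(np.asarray(ct_target_cal) - np.asarray(ct_ref_cal))
    return 2.0 ** (-(dct - dct_cal))                  # calibrate to control mean

# hypothetical Ct triplicates (target gene, beta-actin) for TG and WT pools
tg = fold_change([24.1, 24.3, 24.0], [18.2, 18.1, 18.3],
                 [23.9, 24.2, 24.1], [18.0, 18.2, 18.1])
wt = fold_change([23.9, 24.2, 24.1], [18.0, 18.2, 18.1],
                 [23.9, 24.2, 24.1], [18.0, 18.2, 18.1])

t_stat, p_val = stats.ttest_ind(tg, wt)  # two-tailed, unpaired t-test
print(f"mean fold change TG vs. WT: {np.mean(tg):.2f} (p = {p_val:.3f})")
```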
Tissue Preparation
Embryos (24 hpf) were fixed with 4% paraformaldehyde (PFA) and stored in 100% methanol at −20 °C. For adult fish, fish aged between 6 and 8 months were killed by an MS-222 overdose and the heads harvested after carefully removing the skull roof. Fish heads were fixed overnight in freshly prepared 4% PFA in 0.1 M phosphate buffer (PB), pH 7.4. Fixed samples were decalcified overnight in PB containing 0.5 M EDTA and 20% sucrose prior to embedding in 7.5% gelatin and 20% sucrose in PB. Next, fish heads were instantly frozen on dry ice and cryo-sectioned at 7-10 µm thickness on a Microm HM 560 cryostat. Cryopreserved heads were stored at −80 °C and the cryo-sectioned slides were stored at −20 °C for subsequent immunohistochemistry (IHC).
In situ Hybridization
Embryos (24 hpf) were fixed in 4% PFA and stored in 100% methanol at −20 °C. Whole mount in situ hybridization was performed as previously described (Reifers et al., 1998). Briefly, digoxigenin (DIG)- or fluorescein-labeled probes were synthesized from linear DNA using an RNA labeling and detection kit (Roche), and hybridized probes were detected using anti-digoxigenin or anti-fluorescein antibodies. Antibody staining was visualized using BM purple (digoxigenin) or fast red (fluorescein). The stained embryos were dissected with sharpened tungsten needles, thick sections mounted in glycerol, and sections imaged on a Zeiss Axioplan microscope. In situ probe staining matched previously described expression for both otx2 (Mercier et al., 1995) and pax2a (Krauss et al., 1991). For adult zebrafish brain sections, freeze-thawed and air-dried sections were treated with 100% methanol (500 µl/slide) for 10 min, washed in PBS-TritonX100 (PBS-Tx) buffer prior to incubation with the in situ probe (1:100 dilution), and denatured at 70 °C in hybridization buffer. Hybridization was done overnight at 60 °C in a humidified chamber. Excess/unbound probe was removed by rigorous washing with 1x SSC/50% formamide solution at 62 °C. Sections were then washed with maleic acid buffer containing Tween-20 (MABT) at room temperature (RT). Sections were blocked in DIG blocking reagent (Roche) prior to anti-DIG antibody (1:2,000) incubation overnight at 4 °C, followed by washes with MABT solution to remove excess antibody. Sections were treated with NBT/BCIP diluted in staining buffer (1:17, NTMT) and the reaction was developed at RT until a signal appeared on the sections. The staining reaction was stopped by washing with PBS and slides were mounted with 80% glycerol. Images were acquired on a Zeiss Apotome using a differential interference contrast (DIC) filter and processed using ZEN Blue (ver. 2.3), Adobe Photoshop, and Adobe Illustrator (ver. CS5 and CS6) software.

FIGURE 4 | Characterization of the 5′ and 3′ junctions of the knock-in at the otx2 locus. Representative gel pictures of the otx2:venus (A) and otx2:tRFP (B) knock-in alleles from one founder (lanes 1 and 3) or wild-type siblings (lanes 2 and 4). Lanes 1 and 2 show the 5′ junction PCR (primer pair A+B from Figure 1) and lanes 3 and 4 show the 3′ junction (primers C+D from Figure 1). (C) DNA sequence analysis of the 5′ and 3′ junctions and mutations; "−" denotes deletion and "+" denotes insertion; inserted base pairs are marked in blue.
Live Imaging
Embryos were treated from 20 to 24 hpf with 1-phenyl-2-thiourea (PTU) to block pigmentation and with MS-222 for anesthesia, mounted on a glass bottom dish (MatTek) in 1% low melting agarose, and imaged on a Zeiss LSM 780 or Leica SP5 microscope. Images were analyzed using FIJI (open source software) or Imaris (ver. 7, Bitplane), respective TIFF files generated, and figures assembled in Adobe Photoshop (ver. CS5 or CS6). For time-lapse imaging, optical sections spanning 30 µm, with a Z interval of 2 µm, were imaged every 8 min for about 6 h at 28 °C on LSM780 (Zeiss) microscopes. Maximum intensity projections of fluorescence and transmitted light images were generated using Imaris (ver. 7, Bitplane).
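As a simple illustration of the projection step, the sketch below computes a maximum intensity projection over z for each time point of a 4D stack, analogous to the projections generated here with Imaris/FIJI; the file name and axis order are assumptions made for the example.

```python
# Illustrative maximum intensity projection (MIP) over z for each time
# point of a 4D (t, z, y, x) stack, analogous to projections made in
# Imaris/FIJI. File name and axis order are assumptions for this example.
import tifffile

stack = tifffile.imread("timelapse_otx2_tRFP.tif")   # assumed shape (t, z, y, x)
mip = stack.max(axis=1)                              # collapse the z axis
tifffile.imwrite("timelapse_otx2_tRFP_MIP.tif", mip)
print(f"projected {stack.shape[0]} time points of {stack.shape[1]} z-slices each")
```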
Targeted Knock-in at the otx2 Locus to Generate Reporter Lines
To test whether the CRISPR/Cas9-mediated strategy can be used for knocking reporters into the zebrafish genome, we chose otx2 as a candidate gene because, first, the caudal limit of the Otx2 expression domain marks the MHB and, second, no transgenic otx2 reporter lines that recapitulate its endogenous expression are currently available (Kurokawa et al., 2006). Fast-maturing fluorescent proteins, Venus and turboRFP (tRFP), were used as reporters. The target site (TS) was selected about 500 base pairs upstream of the transcription start site of otx2, and about 1 kb of bait that included the target region was amplified and cloned in front of the fluorescent reporter (Figure 1, scheme). The target site sequence was verified prior to cloning by DNA sequencing. The efficiency of target-site cutting was assayed as follows. Both sgRNA and Cas9 were injected into 1-cell stage embryos, and genomic DNA from individual 24 hpf embryos was isolated for PCR amplification. Loss of the restriction site was used to estimate the efficiency of the sgRNA, and sgRNAs with >50% efficiency were selected (data not shown). Next, the bait plasmid was injected along with the sgRNA and Cas9, and concurrent double-strand breaks were generated in the genomic target locus and in the plasmid DNA, resulting in plasmid integration at the target locus, most probably via the highly active non-homologous end-joining (NHEJ) repair mechanism of the cells. To identify founders, adult F0 fish were outcrossed with wild-type animals and F1 embryos were screened for fluorescence. Positive embryos and a representative embryo at 24 hpf are shown (Figures 2A,B, 3A,B). Reporter expression perfectly matched the expected expression pattern of Otx2 (Langenberg and Brand, 2005), as a sharp boundary was observed at the MHB, with no ectopic fluorescence in non-Otx2+ regions. Two founders were identified for each construct, with no difference in reporter gene expression pattern between them (data not shown). The knock-in was verified by PCR using primers designed such that the forward primer annealed at the expected knock-in genomic locus (but outside the bait sequence) and the reverse primer within the fluorescent reporter donor plasmid (Figures 4A,B). Sanger DNA sequencing confirmed the knock-in location and showed indels at the 5′ and 3′ integration sites (Figure 4C). Germline transmission rates of successful founders were 4% for the otx2:venus and 7.6% for the otx2:tRFP reporter lines (summarized in Supplementary Table S1). The otx2 reporters have currently been outcrossed for more than six generations and transgene expression has remained stable (data not shown). Further, such outcrossing of the transgenic reporters with wild-type strains over several generations dilutes out any potential off-target mutations.
Knock-in Alleles Remain Functional
Next, we addressed whether the knock-in reporters, which introduce a polyA signal and vector backbone sequences 5′ of the ATG, compromise endogenous gene expression. We crossed the Venus knock-in fish with the tRFP knock-in line and sorted embryos at 24 hpf into either single-positive (Venus+ or tRFP+) or double-positive (Venus+ and tRFP+) groups. Morphological examination showed no differences between the single- and double-positive groups (Figure 5A); subsequent analysis of MHB morphology and double in situ hybridization for otx2 and the MHB marker pax2a also showed no differences (Figure 5B). Further, quantitative real-time PCR (qRT-PCR) showed no significant difference in otx2 expression levels in the double-positive embryos compared to wild-type (Figure 8D). These data suggest that reporter genes can be efficiently knocked-in at the otx2 locus, and that integration of a reporter plasmid into the non-coding region upstream of the ATG does not interfere with endogenous gene expression or function.
Practical Application: Live Imaging MHB Development
Neural tube formation and the underlying cell-biology processes that generate the three-dimensional structure of the vertebrate brain are of great interest. Hence, real-time or time-lapse imaging to understand these morphogenetic processes is a valuable approach. We utilized the reporter lines generated using the CRISPR/Cas9 system to observe MHB development in real time.

FIGURE 8 (caption fragment) | The left panel shows the 5′ junction PCR (primer pair A+B from Figure 1) and the right panel shows the 3′ junction (primers C+D from Figure 1). (C) DNA sequence analysis of the 5′ and 3′ junctions and mutations; "−" denotes deletion. (D,E) A comparison of gene expression between homozygous embryos (otx2 or pax2a transgenic animals (TG) positive for both venus and tRFP) and wild-type (WT) embryos (48 hpf) showed no significant differences in the mRNA levels of otx2 (D) and pax2a (E). The two-tailed, unpaired t-test was used to calculate statistical significance; each point in the graph represents one sample containing a pool of 15 embryos. WT vs. otx2, p = 0.072; WT vs. pax2a, p = 0.288.
The otx2:tRFP reporter line was used to follow midbrain cells during MHB formation between 17 and 23 hpf, when the opening of the ventricular space and constriction at the MHB occur. In addition, otx2:tRFP+ neural crest cells leaving the neural tube, as well as otx2:tRFP+ cells in the retina, can be observed (Figure 6 and Supplementary Movie 1). Consequently, the ability to co-label cell membranes and nuclei will be valuable in further elucidating the morphogenetic processes that occur during MHB development, including cell shape changes, cell division, and the acquisition of apical-basal polarity.
Targeted Knock-in at the pax2a Locus
To test the knock-in strategy at other genomic loci, we selected the pax2a locus, as it is one of the earliest MHB markers. Currently, there are no pax2a reporter lines that recapitulate all of its endogenous expression domains (Picker et al., 2002). Again, a highly efficient sgRNA that cleaves the target site upstream of the ATG was chosen (data not shown) and bait plasmids were constructed with Venus or tRFP as fluorescent reporters. Embryos were injected and founders were identified as explained above. Fluorescence was evident in the optic stalk, the MHB, and the optic vesicle at 24 hpf (Figures 7A-C); this expression pattern matches the reported expression pattern of pax2a (Krauss et al., 1991; Lun and Brand, 1998). Germline transmission rates for successful founders were 20% for the pax2a:venus and 2.8% for the pax2a:tRFP reporters (summarized in Supplementary Table S1). The knock-in was verified by PCR using primers designed such that the forward primer annealed at the expected knock-in genomic locus (but outside the bait sequence) and the reverse primer within the fluorescent reporter donor plasmid (Figures 8A,B). Sanger DNA sequencing confirmed the knock-in location and showed indels at the 5′ and 3′ integration sites (Figure 8C). To test whether the knock-in alleles still expressed endogenous pax2a, we crossed the Venus and tRFP reporter lines and found that the double-positive embryos (Venus+ and tRFP+) were morphologically normal (Figure 7D) and that pax2a mRNA levels remained unchanged in the double-positive embryos, as quantified by qRT-PCR (Figure 8E). These data suggest that, similar to the otx2 knock-in, the pax2a locus can also be targeted for genome editing and that the reporter knock-in does not interfere with endogenous gene function.
otx2:Venus Labels Neurons and the Radial Glia in the Adult Zebrafish Midbrain
We next tested the applicability of the otx2:venus reporter line in adult fish. Anatomically (Figure 9A, schematic), cells expressing Venus were prominently present in neuronal nuclei and neurons of the midbrain, specifically in the tectum opticum (TeO), the periventricular gray zone of the optic tectum, and the hypothalamus (Figure 9B). Co-expression analysis of Venus with either HuC/D (a pan-neuronal marker) or S100β (a marker for glial cells) revealed that otx2:venus is expressed mostly in HuC/D+ neurons in the gray matter and in glial cells at the ventricular zone of the TeO (Figure 9C). Further, otx2:venus and HuC/D+ cells were present in various cortical layers of the tectum, such as the stratum opticum (Figure 9D), the stratum griseum et album superficiale and centrale (Figure 9E), and the periventricular gray zone (Figure 9F). However, not all HuC/D+ neurons were labeled by otx2:venus in the tectum, suggesting that Otx2 is expressed only in a sub-population of neurons and that this line might be a good tool to study this specific neuronal network. Based on previously described anatomical distribution and cell shape characteristics (Meek and Schellart, 1978; Meek, 1981), Venus+ cells appear to be type III horizontal neurons belonging to the stratum opticum (Figure 9D), projection neurons of the cortical layers stratum griseum et album superficiale and centrale (Figure 9E), and type XIV interneurons of the periventricular gray zone (Figure 9F). Apart from this, otx2:venus also labeled S100β+ glial cells in the ventricular zone (anatomical location schematized in Figure 10A; overview images in Figures 10B,C). S100β+ cells co-expressing otx2:venus are seen at the ventricular zone (yellow arrowheads), while the neighboring otx2:venus+ neurons are negative for S100β (Figure 10D). Most importantly, in the adult midbrain, otx2:venus expression mirrored endogenous otx2 mRNA expression in the various tectal layers, as shown by in situ hybridization (Figures 11A,B). Taken together, the otx2:venus CRISPR/Cas9 knock-in line appears to label various neuronal and radial glial populations in the adult zebrafish midbrain. The observed selective labeling presents a useful tool to study specific cells of interest while simultaneously overcoming the potential silencing effects that are often observed with other transgenic approaches.

[FIGURE 9 caption fragment: anatomical descriptions are based on the zebrafish brain atlas (Wullimann et al., 1996); tectal sublayers and neuronal types were interpreted based on Meek (1981) and Meek and Schellart (1978).]
pax2a:Venus Reporter Expression in the Adult Zebrafish Midbrain
The pax2a:venus reporter labeled several neuronal subpopulations (anatomical location scheme Figure 12A, overview images Figure 12B). Most importantly, in the adult midbrain, pax2a:venus expression mirrored endogenous pax2a mRNA expression, as shown by in situ hybridization (Figures 12B,C). Essentially, pax2a:venus expression was prominently seen in the neurons of the valvula cerebelli in the hindbrain (red dotted line), the dorsal tegmental nucleus of the midbrain (yellow dotted line area; Figures 12D,E), and other regions of the midbrain. Based on previous anatomical descriptions of the zebrafish brain (Wullimann et al., 1996), pax2a:venus cells could be mapped to neurons present adjacent (left side) to the lateral longitudinal fascicle (yellow dotted circle), with some of these cells possibly belonging to the perilemniscal nucleus (Figures 12F,G). The rostral tegmental nucleus, another neural nucleus, was also positive for pax2a:venus expression (Figures 12H,I). However, there were no Venus+ cells in the glial domain. These results clearly indicate that pax2a:venus, similar to otx2:venus, labels a subpopulation of neurons in the mid-and hindbrain in the adult zebrafish.
DISCUSSION
Here we demonstrate that fluorescent reporters can be efficiently knocked into a specific locus for a gene of interest using the CRISPR/Cas9 system as a genome-editing tool. We have generated four knock-in reporter lines targeting two essential genes involved in MHB development, namely otx2 and pax2a. The exact mechanism of integration (knock-in) at the target site has not been explored in this study. However, given that NHEJ has a greater probability of occurrence in zebrafish (Hagmann et al., 1998; Dai et al., 2010) and homologous recombination (HR) of 5 kb or larger plasmids has not yet been demonstrated, we think that NHEJ is the most probable mechanism of integration. Other studies in zebrafish using similar strategies have also found homology-independent mechanisms such as NHEJ to be the primary mechanism that drives integration (Auer et al., 2014; Kimura et al., 2014; Ota et al., 2016). The combination of methods used here, i.e., CRISPR and non-homologous end joining, has several advantages over BAC or promoter fragment-driven transgenesis. First, by knocking-in the reporter just in front of the endogenous ATG, we utilize the native promoter and enhancer elements that regulate endogenous gene expression. Consequently, reporter expression essentially recapitulates native/endogenous promoter activity and gene expression patterns. Indeed, we show that both Venus and tRFP reporter expression precisely match endogenous gene expression of otx2 and pax2a, both in embryos and adults. The Tg(pax2a:GFP)e1Tg transgenic line that we previously generated (Picker et al., 2002) used native promoter/enhancer elements that recapitulated parts of the endogenous pax2a expression pattern. However, this line also showed ectopic GFP expression in the forebrain, and in rhombomeres 3 and 5 of the hindbrain. Attempts to establish a stable transgenic zebrafish line that recapitulated otx2 expression using combinations of various promoter/enhancer elements have failed (Kurokawa et al., 2006). Importantly, the knock-in reporter lines reported here also mirror endogenous gene expression patterns in the adult zebrafish brain, as defined by in situ hybridization. This aspect is valuable both during embryonic development and for identifying different neuronal and non-neuronal subtypes in the adult zebrafish brain. To our knowledge, this is the first study on reporter expression for otx2 and pax2a in the adult zebrafish brain. Further applications of these reporter lines include their use to better understand the molecular characteristics of Otx2+ and Pax2a+ cells during different developmental and adult stages by subjecting them to fluorescence activated cell sorting (FACS) for transcriptomic and/or proteomic analyses.

[FIGURE 10 caption fragment: radial glial cells at the ventricular zone of the midbrain tectum labeled by S100β, co-expressing otx2:venus. Anatomical descriptions are based on the zebrafish brain atlas (Wullimann et al., 1996).]
Second, reporter knock-in at the non-coding region, just in front of the ATG, did not negatively affect or nullify endogenous gene expression, consistent with the normal MHB morphology of the double transgenic reporters (Venus- and tRFP-positive). In contrast, Ota et al. (2016) recently reported a similar strategy to knock in eGFP at the same genomic locus for pax2a, but the knocked-in allele generated a pax2a mutant allele resembling the homozygous pax2a null allele, no isthmus (noi; Brand et al., 1996). Differences in bait construction might account for why our strategy, unlike that of Ota et al. (2016), did not result in null alleles. Specifically, in our strategy, the plasmid bait supplies the 500 base pairs of the pax2a 5′ sequence in front of the ATG, whereas Ota et al. (2016) used a universal bait sequence containing hsp70l promoter/enhancer elements and eGFP. Thus, our knock-in strategy can also be applied to generating Cre- or Gal4-driver lines that can subsequently be used for lineage tracing and loss-of-function studies. Recently, Suzuki et al. (2016) reported the use of minicircles for highly efficient, homology-independent, targeted in vivo integration. This method avoids plasmid backbone co-integration, and their design of the targeting construct favors forward integration (Suzuki et al., 2016). The use of minicircles could easily be adapted to improve our strategy, as it would avoid plasmid backbone integration, and the smaller size of minicircles may improve targeting efficiency.
Third, the relatively high germ line transmission rates (ranging from 2.8 to 20%) will facilitate targeting of several gene loci as potential reporter lines. Hence, easy generation of Cre/Gal4-based driver lines is feasible because the target site resides in the non-coding region, making screening for in-frame insertions unnecessary. It is important to mention here that fluorescent protein expression is indeed a read-out of the promoter activity. It is for this reason that we chose fast folding proteins like Venus and tRFP as reporters, so that positive cells are labeled rapidly after promoter activation. However, one has to note that these fluorescent proteins have a half-life of about 24 h (Li et al., 1998), and thus, cannot be ideally used to study temporal dynamics; this could be overcome by using fast degradable fluorescent proteins. On the other hand, the persistence of fluorescent proteins can also be used advantageously for short-term lineage tracing and assessment of cell fate.
Zebrafish mutants with loss of function phenotypes such as the no isthmus (noi), acerebellar (ace), and spiel-ohne-grenzen (spg), were identified from large-scale mutagenesis screens and have been fundamental in elucidating the core gene network that regulates MHB formation and maintenance (Brand et al., 1996;Schier et al., 1996;Lun and Brand, 1998;Reifers et al., 1998;Reim and Brand, 2002). The underlying mutations are strong loss of function or null alleles that result in the survival of these mutants for only a few days after birth. Furthermore, very little is known about the expression patterns of the core MHB genes in the larval and adult brains and even less is known about their function under homeostatic and regenerative conditions. Thus, generating CreER driver lines using this knock-in strategy for important players of several gene families involved in MHB development (Otx, Gbx, Wnt, Fgf8, Pax, Eng) will facilitate loss-/gain-of-function studies that can be spatially and temporally tracked. We show that otx2:venus marks some of the neuronal and radial glial cells in the optic tectum and that pax2a:venus labels neurons in the valvula cerebelli of the hindbrain and other neural nuclei in the midbrain. However, further studies using various marker combinations are required to completely map the cell types that express Otx2 or Pax2.
AUTHOR CONTRIBUTIONS
GK and MB conceived the project and designed experiments. GK generated the transgenic zebrafish lines and characterized them in embryonic stages, AC generated data on the adult zebrafish brain, and AM generated reagents for in situ hybridization, acquired, and analyzed data from adult brain sections and performed q-RT PCR experiments. GK, AC, and MB wrote the manuscript.
FUNDING
GK was supported by post-doctoral fellowships from Swedish research council (Vetenskapsrådet) and an EMBO longterm fellowship (ALTF 350-2011). This work was also supported by an ERC advanced grant (Zf-BrainReg) and project grant of the German Research Foundation (Deutsche Forschungsgemeinschaft, project number BR 1746/6-1) to MB. | 7,700 | 2017-06-30T00:00:00.000 | [
"Biology",
"Medicine"
] |
Characterising Correlations between Electric Conductivity and Structural Features in Rotary Swaged Al/Cu Laminated Conductors
This study aims to characterize the correlations between electric characteristics and selected structural features of newly designed Al/Cu laminated conductors manufactured via room temperature rotary swaging. After swaging, the laminates with diameters of 15 mm were subjected to two different post-process annealing treatments. Structure analyses evaluating the effects of the thermomechanical processing were performed via scanning and transmission electron microscopies. Electric conductivity and resistivity of the laminates were experimentally measured and numerically simulated using models designed according to the real conditions. The results showed that the electric resistivity was affected by the grain size, bimodal grains’ distribution (where observed), the presence of twins, and, last but not least, dislocation density. Among the influencing factors were also the area fractions of Al and Cu at the cross-sections of the laminated conductors. The results revealed that fabrication of the laminate via the technology of rotary swaging introduced more advantageous combinations of electric and mechanical properties than fabrication by conventional manufacturing techniques. The lowest specific electric resistivity of 20.6 Ωm × 10−9 was measured for the laminated conductor subjected to the post-process annealing treatment at 350 °C, which imparted significant structure restoration (confirmed by the presence of fine, equiaxed, randomly oriented grains).
Introduction
Generally, laminates are structures consisting of more than one material (element, alloy) and can be fabricated in various forms (layers, imposed fibres, wires, fibres embedded in resin, etc.). Metallic laminates, which can also be denoted as bimetallic composite materials, are popular modern industrial materials and can find use in, e.g., the automotive, aerospace, and marine fields [1,2]. The fact that each of the components introduces specific advantageous properties enhances the performance of the final product.
Not only the design of the laminate, i.e., the selection of component metals and their localization, but also the production method and selected processing temperatures non-negligibly influence the final properties. Metallic laminates can be manufactured by methods (locally) introducing elevated temperatures, such as various types of welding and cladding [3][4][5][6][7]. However, such methods disadvantageously affect the structures of the metallic components and promote the formation of brittle intermetallics. The casting of Cu, too, is a complex process performed at high temperatures, which can deteriorate its electrically insulating properties. Room-temperature fabrication of Al/Cu or Cu/Al composite conductors of the required final dimensions, which can be used directly for rotor bar windings of asynchronous machines, via swaging is thus very promising.
The presented work aimed to characterize the correlation between structure characteristics and electric behaviour for a uniquely designed Al/Cu laminated composite. The laminate was fabricated via room-temperature rotary swaging and subsequently subjected to two different heat treatments. Before fabricating an electric conductor, it is favourable to determine the type of current that will be used during its testing and usage; in other words, will the conductor be used to transfer alternating or direct current (the transfer efficiency is affected by the occurrence of the skin effect for alternating current)? The Al/Cu laminate presented herein is considered to be applicable for the transfer of both direct and alternating electric currents. However, the main focus of this study was to characterize the electric behaviour of the laminate during direct current transfer. The electric characteristics were measured on the experimentally swaged (and heat treated) laminated rods, and also numerically simulated. Structure analyses were performed via scanning and transmission electron microscopies. Microhardness measurements were performed to supplement the structure observations.
Experiment
The materials used to produce the laminates were electro-conductive commercially pure (CP) Cu (Cu and 0.015% P, 0.002% Zn, 0.002% O) and electro-conductive CP Al (Al and 0.25% Fe, 0.21% Si, 0.04% Cu). The laminates were prepared by rotary swaging at room temperature (~23 °C). The unique stacking sequence of the Al/Cu laminate was designed based on our previous experience: the peripheral and axial regions of the laminate should consist of Cu, while the remaining volume of the laminate should consist of Al [32,33]. The Cu lamellas were located in a stellulate pattern within the Al matrix (the laminated workpiece is shown in the photo in Figure 1a). The diameter of the original composite workpiece, which was finally swaged down to the diameter of 15 mm, was 50 mm (the swaged laminated composite is shown in the photo in Figure 1b). The length of the original workpiece was 150 mm. The final reduction ratio for the laminate, calculated using Equation (1), was 2.4:

φ = ln(S0/Sn), (1)

where S0 and Sn are the laminate cross-section areas at the input and output from the swaging dies, respectively.
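As a quick plausibility check (mine, not from the paper), Equation (1) can be evaluated from the quoted diameters, since the cross-section areas scale with the squared diameter:

```python
import math

d0, dn = 50.0, 15.0  # workpiece diameter before and after swaging (mm)
# phi = ln(S0/Sn) = ln((d0/dn)**2) = 2*ln(d0/dn)
phi = 2 * math.log(d0 / dn)
print(f"reduction ratio phi = {phi:.2f}")  # ~2.41, matching the reported 2.4
```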
The swaged products were further subjected to two annealing regimes to characterize the effects of post-process heat treatments on their structures and electric characteristics. The first regime was carried out at the temperature of 250 °C (regime HT1), and the second one was performed at the temperature of 350 °C (regime HT2). The post-process annealing treatments were selected considering the data acquired during our previous research: when subjected to post-process annealing, structures of the swaged laminates exhibited the tendency to recover. Nevertheless, annealing at temperatures higher than 350 °C imparted grain growth and the formation of intermetallics [34]. For each of the heat treatment regimes, the electric furnace was heated to the required temperature. Then, the laminate was inserted into the heated furnace for a dwell time of 15 min. After the dwell time, the laminate was taken out of the furnace and cooled freely in air.
Subsequently, the electric characteristics of all the swaged and heat-treated laminates were measured. Electric resistance of the rods was detected experimentally using the highly accurate four-wire resistance measurement method. During the measurements by this method, two pairs of electrodes were used: sense probes, which measured the voltage drop on the determined length of the laminate, and source probes, which supplied current to the measured laminate [35]. The use of two pairs of separate electrodes ensured that no supply current passed through the sensing electrodes, and that the sensing electrodes measured the voltage drop V1 only. The source probes were connected in series with a calibrated shunt resistor to provide an accurate current measurement; the values of the current supplied by the used source were between 0 and 100 A. The voltage drop V2 monitored by the shunt resistor was directly related to the applied electric current via the constant K = 0.0004 Ω; therefore, the current value could be characterized by the relation I1 = V2/K. The sense probes monitoring the voltage drop V1 during current flow were fixed on the measured laminate with a mutual distance of dL = 500 mm. By using Ohm's law, Rs (the electric resistance of the swaged laminate) on the measured length L (500 mm) was further computed by dividing the voltage drop V1 by the electric current I1, i.e., Rs = V1/I1.
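The arithmetic of the four-wire scheme is compact enough to sketch in a few lines of Python (a sketch with synthetic stand-in data; the variable names and the assumed relation ρ = Rs·A/L are mine, not from the paper):

```python
import numpy as np

K = 0.0004   # shunt calibration constant (ohm); I1 = V2 / K
L = 0.5      # distance between the sense probes (m)

# Synthetic stand-ins for the synchronously recorded voltage drops (V);
# in the experiment these come from the DAQ card data file.
i_true = np.linspace(1.0, 100.0, 200)   # supply current ramp (A)
rs_true = 1.0e-4                        # assumed rod resistance over L (ohm)
v2 = K * i_true                         # voltage drop on the shunt resistor
v1 = rs_true * i_true                   # voltage drop between the sense probes

i1 = v2 / K                  # supply current reconstructed from the shunt
rs = np.mean(v1 / i1)        # Ohm's law: Rs = V1 / I1, averaged over samples
area = np.pi * (0.015 / 2) ** 2   # cross-section of a 15 mm rod (m^2)
rho = rs * area / L               # specific resistivity, if rho = Rs*A/L
print(f"Rs = {rs:.3e} ohm, rho = {rho:.3e} ohm*m")
```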
Measurements of the electric resistance Rs of the swaged laminated rods should be carried out quickly, since the conductors exhibit the tendency to heat by the effect of current flow. During the measurements, the current value increased rapidly from 0 A to 100 A, and then dropped again to 0 A. For such rapid changes, manual data recording was not possible. For this reason, both the voltage drops V1 and V2 were detected synchronously using a DAQ (Data AcQuisition) card (NI-9238 type) and stored in a data file. The DAQ card had a 24-bit resolution at the voltage range of ±500 mV. This enabled us to achieve the minimum measurable sensitivity dU = 60 nV. The stored data then provided the basis for the calculation of the specific electric resistivity ρ via Equation (2),

ρ = Rs·S/L, (2)

where S is the cross-sectional area of the rod (m²) and L is the measured length (m). Before measuring the electric characteristics of the laminated composite rods, the data for the original Al and Cu materials were acquired. Figure 2 depicts the dependence of the voltage drop V1 on the electric current for the original Al and Cu laminate components.
The measured values were smoothed by a linear fit for both metals. The slopes of the curves (V1/I1) determine the resistance R of the measured metals per 1 m of length. Structure analyses of the swaged (and heat treated) laminated rods were carried out by scanning and transmission electron microscopies (SEM and TEM). The samples for SEM electron backscatter diffraction (EBSD) observations, prepared from transversal cuts, were ground manually, polished electrolytically, and observed using a Tescan Lyra 3 XMU FEG/SEMxFIB device equipped with a Symmetry EBSD detector (Tescan, Brno, Czech Republic). The analyses were carried out with a step of 0.5 µm on samples tilted by 70° and evaluated with the help of Aztec Crystal software (Oxford Instruments, Abingdon, UK). Detailed analyses of substructures of the Cu lamellas from the swaged and heat-treated laminates were performed using TEM (JEOL JEM-2100 device, JEOL, Tokyo, Japan). The observations were performed at 200 kV on samples prepared using the focused ion beam (FIB) method assembled on the Tescan Lyra 3 XMU microscope. During FIB preparation, a lamella was milled with Ga ions in multiple steps to a final thickness of about 120 nm. The FIB lamella was taken from a transversal cut through a peripheral Cu lamella, and thus the TEM observations were carried out in a direction parallel to the swaging axis. The last experimental step involved Vickers microhardness measurements, which were performed using a Zwick/Roell testing device (Zwick Roell CZ s.r.o., Brno, Czech Republic). For each indent, the load was 200 gf, and the loading time was 10 s.
Numerical Analysis
To supplement the results of experimental investigations, numerical simulations of the current transfer were performed for the swaged laminated rods with diameters of 20 mm, 15 mm, and 10 mm. The geometries of the swaged rod models ensued from the experimentally observed ones. In other words, the locations and deformations of the individual composite components of the modelled rods were based on the locations and deformations of the Al/Cu laminate components observed experimentally via SEM.
- Components C: represented the significant (i.e., central) part of the laminate, on which the evaluation of the output parameters was performed. The basic parameters were the distribution of the current density across the cross-section of the swaged laminate and the power loss occurring due to the flowing current.
The electromagnetic model of the laminated conductor was used to perform a harmonic electromagnetic analysis using Ansys Electronic Desktop software. A harmonic electromagnetic simulation enabled us to determine the value of power loss dP (W) when the value of the current in the conductor was known. The value of the current used in the simulation, as well as during the experiments, was I = 100 A. After determining the values of the transferred current (input parameter) and power loss (output parameter), the calculation of the total value of electric resistance of the laminated conductor was performed via Equations (3) and (4),

dP = ∫_V (|J|²/σ) dV, (3)

R = dP/I², (4)

where dP is the current loss (W), σ is the electric conductivity of the material from which the conductor is fabricated (S·m−1), J is the current density (A/m²), dV is the element of the conductor volume (m³), R is the resistance of the conductor (Ω), and I is the value of the supply current.
To determine the final value of electric resistance of the laminated rod, the basic equation for the calculation of the total resistance of net resistances in series and parallel circuits (see Figure 4) was used. The parallel resistance parts, R_Al,i || R_Cu,i, represented the length portion of the laminated conductor selected for the elements i. In the elements (nodes) i of the conductor, an ideal mutual connection between the laminate components was considered. The final resistance of the entire length of the conductor then consisted of the sum of resistances for the elements i, which represented the parallel combination of n resistances determined by the cross-section of the conductor and the specific electric resistivity of the relevant metallic component of the laminate. Element i of the laminate also represented a 1/n portion of the total resistance of the total conductor length. For this reason, the resulting final resistance was equal to the parallel combination of resistances of the individual metallic components. The final relation applicable for the calculation of the specific electric resistivity of the laminated conductor was then depicted via Equation (5),

ρ_lam = (A_Cu + A_Al) / (A_Cu/ρ_Cu + A_Al/ρ_Al), (5)

where ρ is the specific electric resistivity (Ωm), A is the area (m²), Cu and Al are indices characterizing the individual component metals, and lam is the index characterizing the laminated rod.
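A small sketch of the parallel-resistance estimate behind Equation (5); the resistivities are the values measured for the CP components later in this paper, while the area split is an illustrative placeholder, not a measured value:

```python
# Specific resistivities of the CP components reported in this work (ohm*m)
RHO_CU = 17.468e-9
RHO_AL = 28.772e-9

def rho_laminate(a_cu: float, a_al: float) -> float:
    """Effective resistivity of the Al/Cu rod treated as parallel resistors.

    a_cu, a_al: cross-sectional areas (or area fractions) of Cu and Al.
    """
    return (a_cu + a_al) / (a_cu / RHO_CU + a_al / RHO_AL)

# Illustrative 30% Cu / 70% Al cross-section split (placeholder values)
print(f"rho_lam = {rho_laminate(0.3, 0.7):.3e} ohm*m")  # ~2.4e-8 ohm*m
```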
(Sub)structure Development
Substructure developments within the peripheries of the Cu lamellas of all the swaged (and heat treated) laminated rods were observed in detail via TEM. Swaging down to the diameter of 15 mm imparted the accumulation of structural defects and substructure formation. Accumulated dislocations forming dislocation cells within the swaged laminate are depicted in Figure 5a, while Figure 5b shows, in detail, evident deformations of grains (horizontal elongation, i.e., elongation in the axial direction of the laminate, imparted by the effect of the axial component of the swaging force) and the formation of subgrains within. Heat treatment via regime HT1 did not impart substantial restoration, as the structure still contained a high density of dislocations (Figure 5c depicts the interior of a grain with accumulated dislocations). However, some grains already exhibited a decrease in dislocation density and the tendency to recover (Figure 5d). Finally, the HT2 heat treatment imparted structure recovery and annihilation of dislocations; the HT2-treated laminate still contained deformed grains, but also dislocation-free grains. Figure 5e depicts both the mentioned structural features and also documents the presence of the bimodal structure distribution within the HT2-treated rod. The HT2 treatment also introduced the formation of twins. Figure 5f shows the detail of a restored grain featuring an annealing twin, of which the diffraction pattern is then shown in Figure 5g.
The results of the TEM observations shown above supplemented the SEM analyses, which also showed heavily deformed grains within the Cu lamellas of the swaged laminate featuring the preferential orientations of <001> and <111> || swaging direction (SD) (see Figure 6a, depicting the orientation image map, OIM, for a Cu lamella of the swaged rod). Similar results of grains' orientations were acquired for the HT1 laminate (see the OIM in Figure 6b). However, this sample already exhibited a few relatively small, recrystallized grains with random preferential orientations (small, recrystallized grains diverging from the <001> and <111> || SD preferential orientations can be seen at the boundaries of larger original grains in Figure 6b). The Cu lamellas of the laminate heat treated via regime HT2 then exhibited an evident presence of recrystallized, randomly oriented grains at the boundaries of the original grains (see Figure 6c). The HT2 sample also exhibited a bimodal grain size distribution and annealing twins (see Figure 6d, depicting a larger area, compared to Figure 5f, featuring the bimodal grain size distribution and twins).
The SEM OIM images for the Al matrices of the swaged, HT1-, and HT2-laminated rods are depicted in Figure 7a-c, respectively. Similar to the further discussed grain size analyses (Section 3.2), the analyses of Al grains' orientations did not reveal any significant differences between the individual material states. As the Al matrix most probably exhibited dynamic recrystallization during swaging to the final laminate diameter of 15 mm, all the Al components within the three examined laminates exhibited more or less equiaxed fine grains featuring comparable portions of the <001> || SD and <111> || SD preferential orientations. The portion of randomly oriented recrystallized grains within the Al components was generally higher than within the Cu components for all the investigated swaged (and heat treated) laminates.
Grain Size
The effects of the intensive plastic deformation on grain sizes of the components of the laminate were assessed, too. The grain sizes were evaluated using the maximum Feret diameter parameter, which is defined as the largest distance between two points defining an individual grain [36]. The average grain sizes for the components of the laminated rods were calculated from the overall numbers of grains present at the respective scanned regions. The graphical depictions of grain sizes were presented as the area-weighted fraction distributions, which depict sums of areas occupied by the grains with the respective diameter, i.e., total areas occupied by the grains at the scanned region, instead of the total number of grains with the respective size [37].
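A compact sketch of the area-weighted binning described above, with synthetic stand-in diameters (in the study these come from the EBSD maps; all names and values below are illustrative, and circular grain shapes are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic max-Feret diameters (um) standing in for EBSD-measured grains
diameters = rng.lognormal(mean=1.1, sigma=0.5, size=500)
areas = np.pi * (diameters / 2) ** 2   # grain areas, assuming circular grains

# Area-weighted size distribution: each bin sums grain AREA, not grain count,
# normalised by the total grain area of the scanned region
area_fraction, edges = np.histogram(diameters, bins=20, weights=areas)
area_fraction /= areas.sum()

print(f"number-averaged grain size = {diameters.mean():.1f} um")
print("area-weighted fractions per bin:", np.round(area_fraction, 3))
```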
The average grain sizes (in µm) for the original Cu and Al were 36.9 µm and 64.0 µm, respectively. The average grain sizes within the swaged laminate were 3.1 µm for the Cu and 3.0 µm for the Al components. The HT1 treatment imparted grain growth (especially for the Cu components), as the average grain sizes for the HT1 laminate were 4.8 µm for Cu and 3.2 µm for Al components. Compared to the swaged state, the standard deviations from the average grain sizes also increased for both the metals, which confirmed the presence of a bimodal grain size distribution (as documented also in Section 3.1). Finally, the HT2 treatment resulted in a slight decrease in the average grain sizes, as their values for the HT2 laminate decreased to 4.2 µm for the Cu and 2.8 µm for the Al components. This fact could be attributed to the abovementioned increased fractions of small, recrystallized grains for this laminate; also, the standard deviations decreased again compared to the HT1 laminate.
The grain size distributions for the original CP Cu and CP Al are depicted in Figure 8a,b, respectively. For the laminates, the analyses were performed at the peripheries of the swaged (and heat treated) rods, since the effect of the deformation processing was the highest in these regions [38]. Figure 8c,d depicts the area-weighted grain size distributions for the Cu lamella and Al matrix, respectively, of the swaged laminated rod. Figure 8e,f then shows the area-weighted grain size distributions for the Cu lamella and Al matrix, respectively, of the swaged laminated rod heat treated via regime HT1. Finally, Figure 8g,h depicts the area-weighted grain size distributions for the Cu lamella and Al matrix, respectively, of the swaged laminated rod heat treated via regime HT2.
Microhardness
As evident from Figure 9, the highest Vickers microhardness value of 115 HV was measured for the Cu lamellas of the swaged laminated rod. The microhardness of the Cu lamellas then decreased as a result of the structure softening introduced by the post-process heat treatments; the lowest average microhardness value for the Cu lamellas (76.5 HV) was detected for the lamellas of the HT2 rod. As regards the Al matrices, the differences between the microhardness values of the individual laminated rods were not as significant as for the Cu lamellas, as the Al exhibited more significant dynamic recrystallization during swaging (documented in Section 3.1). The average HV values for the Al components of the swaged and HT1 rods were comparable (41 HV and 40.3 HV, respectively). The HV value then slightly decreased to 32.3 HV for the Al matrix of the HT2 rod.
Electric Properties
As regards the electric characteristics of the swaged (and heat treated) laminates, the results of the numerical simulations were evaluated at first. The basic output of the harmonic analysis was the distribution of current density across the cross-sections of the swaged laminates, which is depicted in Figure 10 for the laminates swaged to the diameters of 20 mm, 15 mm, and 10 mm (the geometrical layout of the individual metallic components across the laminates' cross-sections corresponded to the real geometries of the laminates observed by SEM). As evident from the figure, the current density was distributed homogeneously across the cross-sectional areas of all the examined laminates. Higher current density values were observed in the axial Cu cores, as well as in the peripheral Cu lamellas. In other words, generally lower current density values were observed for the Al matrices, whereas higher values were observed for the Cu lamellas.
The methods by which the electric characteristics of the laminates were further experimentally measured and calculated were described in Section 2, in which the results of measurements of electric resistivity for the CP metals were also depicted. Expressed in numbers, the specific resistivity and resistance, respectively, were 17.468 Ωm × 10−9 and 225.1 Ω × 10−6 for Cu, and 28.772 Ωm × 10−9 and 441.3 Ω × 10−6 for Al. During evaluations of the electric properties of the laminates, these values of the original metals were considered. Figure 11 depicts the dependence of the voltage drop V1 on the applied electric current for the swaged (and heat treated) laminated rods. Similar to the original CP metals, the data for the laminates were smoothed by linear fits.
The figure thus depicts that the values of the voltage drop exhibited a linear increase with increasing electric current applied for all the examined material states. The slopes of the curves again characterized the resistances R of the laminated rods. As can be seen from the figure, the resistances of the swaged laminate and laminate heat treated via regime HT2 were comparable. However, the laminate heat treated via regime HT1 exhibited significantly increased resistance. To enable a comparison of resistances of the produced laminates, Figure 11 also includes the curves acquired experimentally for CP Al and CP Cu rods, i.e., rods of the original metallic components swaged to a diameter of 15 mm. Evidently, the electric conductivity of the swaged laminate was superior to that of CP Al but deteriorated when compared to CP Cu. Similar results were also acquired for the HT2 laminate. The HT1 laminate exhibited deteriorated electric conductivity when compared to both the original metals in swaged states.
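The slope extraction described above amounts to a least-squares line through the (I, V1) record; a minimal sketch with synthetic data (the names and the assumed slope and noise level are illustrative only, the slope roughly mimicking the swaged CP Al rod):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic (I, V1) record standing in for one measured rod
current = np.linspace(1.0, 100.0, 100)              # supply current (A)
v1 = 1.6e-4 * current + rng.normal(0, 6e-8, 100)    # voltage drop (V)

# Slope of the linear fit = resistance R of the rod over the measured length
slope, intercept = np.polyfit(current, v1, deg=1)
print(f"R = {slope:.3e} ohm (V1 at 100 A = {slope * 100 * 1e3:.1f} mV)")
```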
Discussion
The assembled laminated semi-products with an original diameter of 50 mm were gradually swaged to laminated rods with a diameter of 15 mm, which not only affected the structures of the laminates, but also their electric properties. Swaging with a total reduction ratio of 2.4 resulted in deformation strengthening and significant grain refinement, pointing to the occurrence of dynamic recrystallization during processing.
Generally, the grains within the Cu lamellas were larger than the grains within the Al matrices for all the swaged and heat-treated laminated rods. The primary reason for this phenomenon was that deformation strengthening proceeded differently within the metallic components, which was given by the differences in their intrinsic properties and lattice parameters (the Al matrix tended to consume the imposed shear strain more easily than Cu) [31]. The Al matrices exhibited higher portions of recrystallized grains featuring more randomized orientations than the Cu lamellas. This fact, together with the abovementioned grain size results, points to the significant occurrence of dynamic recrystallization within the Al matrix during swaging. The energy imparted by the HT1 treatment performed at 250 °C introduced slight growth of the grains, rather than further recrystallization. This fact was also confirmed by the grain size measurements, as the average grain sizes of both the Al and Cu components increased slightly after the HT1 treatment (compared to the swaged laminate). Nevertheless, increasing the annealing temperature to 350 °C imparted sufficient energy for the grains to exhibit the annihilation of dislocations (confirmed by TEM) and partial recrystallization, which was proven not only by the grain size analyses, but also by the presence of fine grains featuring randomized orientations (i.e., deviating from the <001> || SD and <111> || SD preferential orientations) within the Al matrix and Cu lamellas of the HT2-laminated rod.
The observed grain size changes were non-negligibly connected to the substructure development and changes in microhardness of the metallic components, as these phenomena were introduced by the imposed shear strain, i.e., deformation strengthening. Compared to the swaged state, microhardness decreased for the HT2 laminate, especially due to the observed decrease in dislocation density, which resulted in structure softening. The structural phenomena occurring also introduced changes in the geometrical distribution of the metallic components across the cross-section of the laminated rod. As documented by the results of numerical simulations, the current density, as well as the consequent electric characteristics of the laminate, were not only affected by the area fractions of the metallic components across the cross-section of the laminate, but also by the electric conductivities of both the metallic components. The material featuring a lower electric resistivity, i.e., higher conductivity, had a higher tendency to transfer the electric current despite the fact that its cross-sectional area fraction across the transversal cut through the laminate was lower. The higher current density of the respective metallic component caused the electric current to flow primarily through that part of the laminated conductor featuring a lower value of specific electric resistivity. Moreover, from the viewpoint of the transfer of electric current through the laminated conductor, it was not important whether the lengths of the individual material components were mutually insulated, or closely connected, or whether they exhibited certain transitional resistance. Figure 11, characterizing the experimentally observed electric behaviours of the laminated rods, showed that all the swaged and heat-treated laminates exhibited deteriorated electric conductivity when compared to a swaged Cu rod. This fact was related to the above-discussed structural phenomena, i.e., to deformation strengthening of both the metallic components ensuing from grain size decrease, substructure development, and the increase in dislocation density introduced by the intensive plastic deformation [39]. Interestingly, the values of voltage drop V1 measured for the commercially available electro-conductive CP Cu and CP Al components were different from those calculated for swaged CP Cu and CP Al rods (a linearly increasing trend was considered). In other words, the V1 value at the maximum current of 100 A was V1 = 44 mV for Al and V1 = 22 mV for Cu (see Figure 2). Nevertheless, for rods swaged to the diameter of 15 mm, the theoretically calculated V1 values were V1 = 26.5 mV for CP Al and V1 = 14.6 mV for CP Cu. However, the real values measured for the 15 mm rods swaged within this study were V1 = 16 mV for CP Al and V1 = 10 mV for CP Cu (see Figure 11). This fact confirmed that swaging imparted significant positive changes in the microstructures of both the metallic components (compared to the original annealed CP states). The imposed strain imparting the substructure development thus favourably affected electric conductivity, i.e., resulted in an electric resistivity decrease. Generally, the presence of annealing twins, which is typically significant in rolled/drawn and annealed Cu pieces, contributes to an electric resistivity increase [39]. This was most likely the primary reason for the higher resistivity of the original annealed CP Cu (compared to the swaged CP Cu rod).
On the other hand, the larger grain size, which is typically observed within annealed non-deformed metals, contributes to higher electric conductivity given by the relatively low volume of grain boundaries, which act as barriers during the electric current transfer (similar to other structural features, which can possibly be present in deformed structures, such as dislocation cells and subgrains). Partial structure restoration, i.e., the presence of small, recrystallized grains with randomized preferential orientations within the structure (primarily the Al matrix), imparted an increase in the electric resistivity. In other words, partial reorientations of subgrains, i.e., grains defined by low-angle grain boundaries newly developing within the original grains [40], introduced by the heat treatment at the lower temperature (HT1) resulted in increased electric resistivity compared to the swaged material state. On the contrary, heat treatment at the higher temperature (HT2) imparted the annihilation of dislocations, the development of twins, and a relatively large fraction of recrystallized grains featuring randomized orientations. These phenomena again resulted in decreased electric resistivity-its value for the HT2 laminate was comparable to that of the swaged laminate. In summary, structural phenomena, such as deformation twins, can decrease electric conductivity (particularly for pure metals). However, their presence in deformed laminates is not as significant from the viewpoint of increasing electric resistivity. On the other hand, heterogeneity in the orientations of (sub)grains can be a much more substantial factor influencing the electric conductivity of the laminate.
Conclusions
This paper aimed to characterize the electric properties and structural features of Al/Cu-laminated rods, which were successfully prepared via combinations of room-temperature rotary swaging and post-process heat treatments. The assembled composites 50 mm in diameter were progressively swaged down to laminated rods with a diameter of 15 mm, and subsequently subjected to two heat treatment regimes (250 °C = HT1, and 350 °C = HT2). The main acquired results were the following:
• Swaging to the diameter of 15 mm imparted favourable electric characteristics and an increase in microhardness up to 115 HV for the Cu components of the laminate.
• HT1 material state: the occurrence of relatively small, restored grains with randomized preferential orientations was the main reason for the observed deterioration of electric conductivity (specific electric resistivity of approx. 34 × 10−9 Ωm, compared to approximately 23.5 × 10−9 Ωm for the swaged state).
• HT2 material state: the electric conductivity improved to a level comparable with the swaged state (approximately 22.7 × 10−9 Ωm); in addition, the higher annealing temperature caused a bimodal grain size distribution and the development of annealing twins.
• A favourable effect of rotary swaging on the electric conductivity was also observed for the original Al and Cu (the theoretically calculated voltage drop values were higher than the experimentally measured ones for both); this phenomenon was related to the substructure development (substantial grain refinement and high dislocation density) imparted by swaging.
The presented results proved that the Al/Cu laminate of the proposed stacking sequence is suitable for the production of conductors for direct current transfer. Although there are differences between the characteristics of alternating and direct currents (primarily the occurrence of the skin effect during alternating current transfer), the Al/Cu laminate is expected to also be suitable for alternating current transfer; the detailed investigation of the behaviour of the Al/Cu laminate during alternating current transfer is the primary focus of our ongoing research.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The original data supporting the research are not publicly available, but the data that are not confidential are available on request from the corresponding author.
"Engineering",
"Materials Science"
] |
Conformal Vector Fields and Null Hypersurfaces
We give conditions for a conformal vector field to be tangent to a null hypersurface. We particularize to two important cases: a Killing vector field and a closed and conformal vector field. In the first case, we obtain a result ensuring that a null hypersurface is a Killing horizon. In the second one, the vector field gives rise to a foliation of the manifold by totally umbilical hypersurfaces with constant mean curvature which can be spacelike, timelike or null. We prove several results which ensure that a null hypersurface with constant null mean curvature is a leaf of this foliation.
Introduction
Generalized Robertson–Walker spaces (I × F, −dt² + f(t)²g₀), where I ⊂ R, (F, g₀) is a Riemannian manifold and f ∈ C∞(I) is a positive function, are of great importance since they include the classical cosmological models, and they have been widely studied from different points of view. For example, in [1,2,16,17] the authors gave sufficient conditions for a constant mean curvature spacelike hypersurface to be a slice {t} × F. Observe that these slices are totally umbilical hypersurfaces and that they are the orthogonal leaves of the foliation induced by the timelike, closed and conformal vector field K = f∂t. In fact, a generalized Robertson–Walker space is locally characterized by the existence of a vector field with these properties. This is why in [5] the authors considered directly a Lorentzian manifold furnished with such a vector field.

A rigging may not exist (globally) for an arbitrary null hypersurface, but locally its existence is guaranteed. A rigging gives rise in a natural way to a null vector field ξ ∈ X(L) with Rad(T_pL) = span(ξ_p) and g(ζ, ξ) = 1, called the rigged vector field, a spacelike distribution S on L called the screen distribution, and a null vector field transverse to L given by N = ζ − ½g(ζ, ζ)ξ. Moreover, we have g(N, ξ) = 1, N ⊥ S and g(N, N) = 0. So, for each p ∈ L we can consider the projections P_{TL} : T_pM → T_pL and P_S : T_pM → S_p induced by the above decompositions. The tensors B, C and τ, where U, V ∈ X(L) and X ∈ Γ(S), are called the second fundamental form, the screen second fundamental form and the rotation one-form, respectively. According to decompositions (2) and (3), we have the corresponding decomposition formulas, where ∇^L_U V ∈ X(L) and A(U), A*(U), ∇*_U X ∈ Γ(S). B is a symmetric tensor which satisfies B(U, ξ) = 0 (8) (therefore ξ is a pre-geodesic vector field), and C satisfies Eq. (10) for all X, Y ∈ Γ(S), where ω is the rigged one-form given by ω(U) = g(ζ, U) for all U ∈ X(L). The null mean curvature of the null hypersurface is the trace of A*, namely H(p) = Σ_{i=1}^{n−2} g(A*(e_i), e_i), where {e₁, ..., e_{n−2}} is an orthonormal basis in S_p. Recall that although the tensor B depends on the chosen rigging (more concretely, on the rigged vector field), conditions on the null hypersurface such as being totally umbilical (B = (H/(n−2))g), being totally geodesic (B = 0) or having zero null mean curvature are independent of the choice. From equation (8) it follows that B is always degenerate, but it also implies that having a null second fundamental form with the property B(v, v) ≠ 0 for all v ∈ S with v ≠ 0 is likewise independent of the chosen rigging. We call this a screen non-degenerate second fundamental form.
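In the rigging formalism these objects are usually introduced through decomposition formulas of the following type (sign conventions vary between sources; the identity B(U, V) = −g(∇_U ξ, V), used later in the discussion of Killing horizons, fixes the ones displayed here, so this should be read as a reconstruction rather than a verbatim quotation):

\[
\nabla_U V = \nabla^{L}_U V + B(U,V)\,N, \qquad \nabla_U N = -A(U) + \tau(U)\,N,
\]
\[
\nabla^{L}_U X = \nabla^{*}_U X + C(U,X)\,\xi, \qquad \nabla^{L}_U \xi = -A^{*}(U) - \tau(U)\,\xi,
\]
\[
B(U,V) = -g(\nabla_U \xi, V) = g(A^{*}(U), V), \qquad C(U,X) = g(\nabla_U X, N), \qquad \tau(U) = g(\nabla_U N, \xi).
\]

These relations are mutually consistent: pairing the last Gauss–Weingarten-type formula with V ∈ X(L) gives the stated expression for B, and pairing the first with N gives the one for C.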
Several equations linking the above tensors will be used below, among them Eq. (11) and the Gauss–Codazzi equation (12). Recall also the important Raychaudhuri equation. From it we can deduce that if ξ is geodesic, H is constant and Ric(ξ, ξ) ≥ 0, then L is totally geodesic. Another basic curvature relation holds for any orthonormal basis {e₁, ..., e_{n−2}} of S_p and v ∈ T_pM. The rigged metric is the Riemannian metric on the null hypersurface L defined by g̃ = g + ω ⊗ ω. This metric makes ξ g̃-unitary and g̃-orthogonal to S; therefore ∇̃_ξξ ∈ Γ(S). It can be used as an auxiliary tool, and its usefulness has been shown in several papers, [3,10,11,18]. An important relation between the Levi-Civita connection ∇ of g and the Levi-Civita connection ∇̃ of g̃ holds for all U, V, W ∈ X(L), and in particular for all X, Y, Z ∈ Γ(S). When ω is closed, dω = 0, the screen distribution is integrable and an easier relation between ∇ and ∇̃ can be given for all U, V, W ∈ X(L), [13, Proposition 3.15].
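One standard form of the Raychaudhuri equation in this formalism is the following (a reconstruction consistent with the deduction just stated; the signs depend on the conventions fixed above):

\[
\xi(H) \;=\; \tau(\xi)\,H \;-\; \operatorname{Ric}(\xi,\xi) \;-\; \operatorname{tr}\!\left((A^{*})^{2}\right).
\]

Indeed, if ξ is geodesic then ∇_ξξ = −τ(ξ)ξ = 0 forces τ(ξ) = 0, and if moreover H is constant the equation reduces to 0 = −Ric(ξ, ξ) − tr((A*)²); when Ric(ξ, ξ) ≥ 0, both terms on the right are non-positive, so A* = 0 and L is totally geodesic.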
We say that a rigging is distinguished if the induced rotation one-form τ vanishes. On the other hand, we say that it is screen conformal if there is a function ϕ ∈ C∞(L) such that C = ϕB. In this case, C is symmetric and thus, from Eq. (10), S is integrable. Moreover, we have the following.

Lemma 1 ([10,18]). Let L be a null hypersurface and ζ a rigging for it.
If ζ is distinguished and screen conformal, then the rigged one-form ω is closed.

Proof. Observe that ω is the g̃-metrically equivalent one-form to ξ, so there is an explicit expression for dω(U, V) for all U, V ∈ X(L). In particular, dω = 0 if and only if S is integrable and ξ is g̃-geodesic. Now, from Eqs. (11) and (18) we get the first point. The second and third points follow immediately from the first one.
In the following lemma we relate the Laplacian of a function defined on M and the Laplacian with respect to g̃ of its restriction to L, obtaining a formula analogous to [7, Formula 4]. In the case of a closed rigging, we also relate the Laplacian with respect to g̃ of a function defined on L and the Laplacian of its restriction to a leaf of the screen computed with respect to the metric induced from the ambient.

Lemma 2. 1. If we call f̄ = f ∘ i and take X, Y ∈ Γ(S), then: 3. If dω = 0, φ ∈ C∞(L), S is a leaf of the screen distribution S and j : S → L is the canonical inclusion, then a corresponding formula holds, where Δ_S is the Laplacian computed in the metric induced on the leaf S.
Proof. If we decompose ∇f = P_S(∇f) + g(∇f, N)ξ + g(∇f, ξ)N according to decompositions (2) and (3), then the first formula is straightforward to check. Using Eq. (16), we obtain the second one; using Eq. (15) and Lemma 1, the next expression follows. For the third formula of item (1), just note that it is expressed through ∇^S, the gradient in the metric induced on S. Now, using again Eqs. (16) and (17), and since dω = 0, from Lemma 1 we get ∇̃_ξξ = 0 and we obtain the desired formula.
The fundamental tensors of a null hypersurface (B, C and τ ) depend on the chosen rigging. However, if we change the rigging, then we can express the new tensors in terms of the old ones. For our purpose, we only consider a very special rigging change.
Lemma 3 ([18]). If ζ is a rigging for a null hypersurface L and Φ ∈ C∞(L) is a never-vanishing function, then ζ̃ = Φζ is also a rigging for L, and the new rigged vector field and fundamental tensors can be written explicitly in terms of the old ones and Φ.
Conformal Vector Fields
Recall that a vector field K ∈ X(M) is called conformal if L_K g = 2ρg for some ρ ∈ C∞(M), called the conformal factor. If ρ = 0, then it is called a Killing vector field. If we call η the one-form metrically equivalent to K, then

∇_U K = ρU + ϕ(U)    (19)

for all U ∈ X(M), where ϕ is characterized by dη(U, V) = 2g(ϕ(U), V). If ϕ = 0, then η is closed and K is called closed and conformal. If ρ = 0 and ϕ = 0, then K is called a parallel vector field. It is immediate that ϕ is skew-symmetric, and so ∇_Uϕ is also. Moreover, since dη is closed, a further identity for ∇ϕ also holds. In the following lemma we give some basic facts about conformal vector fields. From now on, we call λ = g(K, K).
Proof. We get the first point by taking the derivative of λ = g(K, K) and using Eq. (19). For the second one, take divergences. For the third point, if X ⊥ K, then the assertion follows by differentiating Eq. (20). The last point is straightforward.

Under some suitable conditions, a conformal vector field is parallel, as the following lemma shows.

Proof. The first point follows from Eq. (20). For the second point, observe that the relevant identity holds for all U ∈ X(M), which implies that ρ = 0.
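With Eq. (19) in the form displayed above, the gradient of λ = g(K, K) follows from a one-line computation:

\[
U(\lambda) = 2\,g(\nabla_U K, K) = 2\rho\,g(U,K) + 2\,g(\phi(U),K) = g\big(U,\; 2\rho K - 2\phi(K)\big),
\]

so that ∇λ = 2ρK − 2ϕ(K); in particular, for a closed and conformal field (ϕ = 0) this reduces to ∇λ = 2ρK, the identity used later in the proof of Theorem 6.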
In the third case, the function λ = g(K, K) has a maximum at p. Using Lemma 4 and the maximum principle, we obtain that λ is constant. Therefore, K is parallel because it has constant length.
If we drop the condition Ric(K, K) ≤ 0 in the third point of the above lemma, then we cannot conclude that K is parallel. In fact, the causal character of a closed and conformal vector field can be arbitrary, as the following examples show.
Example 1. Take (F, g₀) a Riemannian or Lorentzian manifold and ε = ±1. The vector field K = f∂t in the warped product (I × F, εdt² + f(t)²g₀) is closed and conformal. If ε = 1, then K is spacelike at every point, and if ε = −1, then K is timelike at every point.
On the other hand, the position vector field K = n i=1 x i ∂x i in the Minkowski space is closed, conformal and its causal character changes pointwise.
From [6], we can also construct an example of a causal, closed and conformal vector field which is null at some point. Take E(v) an arbitrary function and consider the Lorentzian surface (M, g) = (R², E(v)du² + 2dudv). The vector field K = ∂u − E(v)∂v satisfies ∇_U K = −(E′(v)/2)U for all U ∈ X(M), and therefore it is closed and conformal. Since g(K, K) = −E(v), a suitable choice of E(v) gives the desired example.
Suppose now that L is a null hypersurface with a rigging ζ, and write K = K₀ + νξ + μN according to decompositions (2) and (3), where K₀ = P_S(K), μ = g(K, ξ) and ν = g(K, N). We need to compute the Laplacian of μ and ν with respect to g̃, but since they are functions defined only on L we cannot use Lemma 2. We begin by computing the gradient with respect to g̃.

Proof. Using Eqs. (5), (6) and (19), and multiplying by N and ξ while taking into account (1), (7) and (9), we get the first and second equations. Using the projection P_S we obtain the third one.
Observe that, ϕ being skew-symmetric, we have ϕ(ξ) ∈ X(L). Next, we compute the divergence with respect to g̃ of P_S(ϕ(ξ)) and A*(P_S(K)).

Proposition 1. If K ∈ X(M) is a conformal vector field and L a null hypersurface with rigging ζ, then the divergences of P_S(ϕ(ξ)) and A*(P_S(K)) can be computed explicitly.

Proof. If {e₁, ..., e_{n−2}} is an orthonormal basis of S at a point p, then using Eq. (16) we can expand div P_S(ϕ(ξ)). We can suppose that the e_i are eigenvectors of A*, so Σ_i τ(e_i)g(ϕ(ξ), e_i) = τ(P_S(ϕ(ξ))), and from Lemma 1 we obtain an expression for div P_S(ϕ(ξ)). On the other hand, since {e₁, ..., e_{n−2}, (ξ + N)/√2, (ξ − N)/√2} is an orthonormal basis, a further simplification follows. If we replace (23) in the expression (22), then we obtain the first formula.
For the second one, using again Eq. (16), the Gauss–Codazzi Eq. (12) and the formula (21) of Lemma 6, we can expand div A*(P_S(K)). Now, we compute the remaining trace term. For this, we can suppose that {e₁, ..., e_{n−2}} is a basis of eigenvectors of A*. Extend them to an orthonormal basis {E₁, ..., E_{n−2}} locally defined in a neighbourhood of p such that E_i ∈ Γ(S) and E_i(p) = e_i. On the other hand, using Eq. (14) and Lemma 4, the quantity K(span(ξ, N)) enters the computation.
Taking into account equations (13) and (23), we get the second formula.

Now, we can give a result ensuring that a conformal vector field is tangent to a null hypersurface.

Theorem 1. Let K ∈ X(M) be a conformal vector field with constant conformal factor ρ. Suppose that L is a null hypersurface with zero null mean curvature and ζ is a rigging for L such that:
1. dτ = 0.
2. C(ξ, X) = 0 for all X ∈ Γ(S).
3. K(span(ξ, N)) ≤ Ric(N, ξ).
If g(K, ξ) is signed and there is a point p ∈ L with K_p ∈ T_pL, then K is tangent to L.
Proof. We can suppose that there is a positive function f defined in a neighbourhood θ ⊂ L of p such that τ = d ln f. From Lemma 3 we have that, for the restricted rigging ζ̃ = (1/f)ζ, the associated rotation one-form vanishes and all the hypotheses of the theorem remain true. Moreover, changing the sign of the rigging if necessary, we can suppose that μ = g(K, ξ) is non-positive. Now, Lemma 6 gives us that ∇̃μ = −A*(K₀) − P_S(ϕ(ξ)), and applying Proposition 1 we obtain that Δ̃μ ≥ 0. Since μ(p) = 0 and μ is non-positive, μ has a local maximum at p, so μ vanishes in θ. By connectedness, μ vanishes on L and so K is tangent to L.
A null hypersurface L is called a Killing horizon if there is a Killing vector field K ∈ X(M) such that K_x = ν(x)ξ_x for all x ∈ L, where ν ∈ C∞(L) is a never-vanishing function. In this case, L is necessarily totally geodesic, since B(U, V) = −g(∇_Uξ, V) would be both symmetric and skew-symmetric, hence zero. The following corollaries give conditions for a null hypersurface to be a Killing horizon.
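Explicitly, writing K = νξ on L and using that g(ξ, U) = 0 for every U tangent to L, the Killing equation gives, for all U, V ∈ X(L),

\[
0 = (\mathcal{L}_K g)(U,V) = g(\nabla_U(\nu\xi), V) + g(\nabla_V(\nu\xi), U) = -2\nu\,B(U,V),
\]

since the derivative terms U(ν)g(ξ, V) and V(ν)g(ξ, U) vanish and B is symmetric; as ν never vanishes, B = 0 and L is totally geodesic.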
Corollary 1. Let K ∈ X(M) be a Killing vector field. Suppose that L is a null hypersurface with zero null mean curvature and ζ is a rigging for L such that:
1. dτ = 0.
2. C(ξ, X) = 0 for all X ∈ Γ(S).
3. K(span(ξ, N)) ≤ Ric(N, ξ).
If K_x is causal for all x ∈ L and there is a point p ∈ L with K_p ∈ T_pL (and therefore K_p is null), then L is totally geodesic and K_x = ν(x)ξ_x for all x ∈ L for a certain ν ∈ C∞(L).

Example 2. Consider a metric of the form 2F(r)dudv + r²g₀, where g₀ is the standard metric in S², [20]. The totally geodesic null hypersurface is a Killing horizon for the Killing vector field K = v∂v − u∂u. If we take the rigging ζ = ∂u, then the rigged vector field is ξ = (1/F)∂v and the null transverse vector field is N = ζ. Through L it holds that r = 2m, so a direct computation shows that τ = 0 and C = −(v/2)g, and in particular C(ξ, X) = 0 for all X ∈ Γ(S). Clearly, it also holds that K(span(ξ, N)) ≤ Ric(N, ξ), since both vanish.
Corollary 2. Let K ∈ X(M) be a conformal vector field with constant conformal factor ρ. Suppose that L is a totally geodesic null hypersurface and ζ is a rigging for L such that:
1. dτ = 0.
2. C(ξ, X) = 0 for all X ∈ Γ(S).
3. K(span(ξ, N)) ≤ Ric(N, ξ).
If K_x is causal for all x ∈ L and there is a point p ∈ L with K_p ∈ T_pL (and therefore K_p is null), then K is a Killing vector field and K_x = ν(x)ξ_x for all x ∈ L for a certain ν ∈ C∞(L).
Proof. Applying Theorem 1, we have K_x = ν(x)ξ_x for all x ∈ L; but since L is totally geodesic, necessarily ρ = 0.
Remark 1. Suppose that L is a Killing horizon for a Killing vector field K ∈ X(M). If we fix a rigging, then K_x = ν(x)ξ_x for all x ∈ L, so along L we have ∇_ξK = fξ, where f = ξ(ν) − ντ(ξ). If f(x) ≠ 0 for some x ∈ L, then the causal character of K changes from spacelike to timelike in a neighborhood of x.

The existence of a timelike gradient vector field is incompatible with the existence of compact null hypersurfaces, [13]. We can also give an obstruction in the case of a conformal timelike vector field.

Theorem 2. Let K ∈ X(M) be a timelike conformal vector field with constant conformal factor ρ. Suppose that L is a totally geodesic null hypersurface and ζ is a rigging for L such that:
1. dτ = 0.
2. C(ξ, X) = 0 for all X ∈ Γ(S).
3. K(span(ξ, N)) < Ric(N, ξ).
Then L cannot be compact.
Proof. As before, Lemma 1 and Proposition 1 give us an expression for Δ̃μ which is signed. If L is compact, then μ is a nonzero constant, and integrating with respect to g̃ we get a contradiction.

Example 3. We give an example of a compact totally geodesic null hypersurface where the hypotheses of the above theorem are fulfilled except the condition about the curvature. In the Lorentzian flat torus (Tⁿ, g) = (S¹ × ··· × S¹, dx₁dx₂ + dx₃² + ··· + dxₙ²), the null hypersurface L = {x ∈ Tⁿ : x₂ = p} for a fixed p ∈ S¹ is totally geodesic, and ζ = ∂x₂ is a null rigging for it. Since ζ is parallel, we have that τ = 0 and C = 0. On the other hand, K = ∂x₁ − ∂x₂ is a timelike parallel vector field.
Closed and Conformal Vector Fields and Null Hypersurfaces
The orthogonal distribution to a closed vector field is integrable, so it gives rise to a foliation of the manifold. In this case, if K_p ≠ 0, we call F_p the orthogonal leaf through p ∈ M. The following lemmas show some properties of the leaves.
Observe that in Example 1 we showed a causal, closed and conformal vector field which is null at some points.
On the other hand, under the conditions of the above corollary, we have a totally geodesic null hypersurface F_p and a null hypersurface L with zero null mean curvature which are tangent at p, but we cannot apply the maximum principle for null hypersurfaces [8, Theorem II.1] because, a priori, we cannot ensure that one null hypersurface lies to the future side of the other one.
Remark 2. If we suppose that τ = 0 instead of dτ = 0 in the above corollary, then we can conclude that ν is constant. In fact, by Lemma 8 we have that ρ = 0 along L and thus ∇_U K = 0 for all U ∈ X(L). Differentiating K = νξ along U ∈ X(L), we get 0 = U(ν)ξ + ν∇_Uξ; contracting with N and using τ = 0, this yields U(ν) = 0, and thus ν is constant.
In a similar way as above, we can also prove the following theorem, which ensures that a null hypersurface is an orthogonal leaf of a parallel null vector field.
Theorem 4. Let K ∈ X(M) be a null parallel vector field and L a null hypersurface with rigging ζ such that:
– H is constant.
– K(span(ξ, N)) ≤ Ric(ξ, N).
If there is p ∈ L such that K_p ∈ T_pL, then K_x = νξ_x for all x ∈ L for a nonzero constant ν ∈ R, and L is a totally geodesic orthogonal leaf of K.
Proof. We can suppose that μ = g(ξ, K) is non-positive, and so, using Proposition 1, we have that μ ≥ 0; hence μ = 0. Therefore, since K is causal, we have K = νξ for a certain ν ∈ C∞(L), and L is a totally geodesic orthogonal leaf. We can show as in Remark 2 that ν is necessarily constant.
Observe that in the above theorem we cannot suppose dτ = 0 as in Theorems 1 and 3. In those theorems we can scale the rigging to get τ = 0 and all the hypotheses still hold; in the case of Theorem 4, if we scale the rigging, then we lose the condition that H is constant.
In the following corollary, observe that if the null mean curvature of a compact null hypersurface is constant, then it is necessarily zero, since ∫_L H dg̃ = 0, [13].

Corollary 4. Let K ∈ X(M) be a null parallel vector field and L a compact null hypersurface with rigging ζ such that:
– H is constant.
– K(span(ξ, N)) ≤ Ric(ξ, N).
Then K_x = νξ_x for all x ∈ L for a nonzero constant ν ∈ R, and L is a totally geodesic orthogonal leaf of K.
Proof. If K_p ∉ T_pL for all p ∈ L, then K is a rigging for L, but this is not possible because it is a gradient and L is compact, [13,18]. Thus K_p ∈ T_pL for some p ∈ L, and we can apply the above theorem.
We say that a rigging ζ induces a preferred rigged connection if the Levi-Civita connection induced from the rigged metric g̃ coincides with the induced connection ∇^L. In some sense, a null hypersurface admitting a preferred rigged connection can be handled formally as a non-degenerate one, [18]. The necessary and sufficient conditions for a rigging to induce a preferred rigged connection are τ = 0 and B = C, [4,18,19].
Corollary 5. Let K be a parallel null vector field, L a null hypersurface and ζ a rigging for it. If H is constant, ζ induces a preferred rigged connection, K(span(ξ, N)) ≤ Ric(ξ, N) and there is a point p ∈ L with K_p ∈ T_pL, then K_x = νξ_x for all x ∈ L for a nonzero constant ν ∈ R, and L is a totally geodesic orthogonal leaf of K.
Example 4. Take (M₀, g₀) a Riemannian manifold and consider the plane fronted wave (M, g) = (M₀ × R², g₀ + 2dudv + φ(x, u)du²). We have that K = ∂v is a parallel null vector field, and the orthogonal leaf through a point p = (x₀, u₀, v₀) is given by F_p = M₀ × {u₀} × R. This is a totally geodesic null hypersurface, and ζ = ∇v = ∂u − φ∂v is a rigging for F_p with rigged vector field ξ = ∂v. From Eq. (4) we have that τ = 0, and using that g(∇_Xζ, Y) = 0 for all X, Y ∈ X(M₀) and Eq. (11), we also have C = 0. Therefore, ζ induces a preferred rigged connection on F_p. Moreover, since ξ = ∂v is parallel, K(span(ξ, N)) = Ric(ξ, N) = 0.
Using the above corollary, the orthogonal leaves of K are the unique null hypersurfaces in (M, g) with these properties.
As we said before Theorem 2, the existence of a timelike gradient prevents the existence of compact null hypersurfaces. More generally, if the first de Rham cohomology group is trivial, then the existence of a closed rigging is an obstruction to the compactness of the null hypersurface. We give an obstruction to compactness in the case of a closed (not necessarily gradient) conformal vector field.

Theorem 5. Let K ∈ X(M) be a closed and conformal vector field and L a null hypersurface. Suppose that K is a rigging for L and one of the following holds:
– ρ(x) ≠ 0 for all x ∈ L.
– Ric(K_x, ξ_x) ≠ 0 for all x ∈ L.
Then L is not compact. Moreover, if ξ is a complete vector field, then L is diffeomorphic to R × S.
Proof. If we call ζ = K and λ̃ = λ ∘ i, then ∇λ = 2ρζ and ∇̃λ̃ = 2ρξ. On the other hand, we have X(ρ) = 0 for all X ⊥ K, so if K_x is not null for some x ∈ L, then ∇ρ_x = ξ(ρ)K_x. If K_x is null, then K_x = N_x and, since both the screen distribution and K_x itself are orthogonal to K_x, we get ∇ρ_x = ξ(ρ)N_x = ξ(ρ)K_x. Thus, in any case, ∇ρ = ξ(ρ)K and so ∇̃ρ̃ = ξ(ρ)ξ = −(1/(n−1))Ric(K, ξ)ξ, where as before ρ̃ = ρ ∘ i. If L were compact, then λ̃ and ρ̃ would have critical points, which contradicts the hypotheses.
For the last part, since K is closed, we have that ξ is g̃-unitary and closed, and we can check as in [12, Proposition 2.1] that the flow Φ of ξ gives us a covering map Φ : R × S → L, where S is a leaf of the screen. Both λ̃ and ρ̃ are constant on the leaves of the screen and, by the hypotheses, for fixed x ∈ S the function λ̃(Φ_s(x)) or ρ̃(Φ_s(x)) is strictly monotone in s. Therefore, Φ : R × S → L is injective and so a diffeomorphism.
Observe that, from [15, Theorem 18], in the above situation we can scale the rigging to obtain a geodesic rigged vector field, i.e., τ(ξ) = 0. On the other hand, since g(ξ, ∇ρ) = Ric(K, ξ), to ensure that L is not compact under the assumption Ric(K, ξ) ≠ 0 we do need to suppose that K_x ∈ T_xL for all x ∈ L.
We focus now on the case where the closed and conformal vector field is tangent to the null hypersurface and we give sufficient conditions to ensure that the null hypersurface is an orthogonal leaf in this situation.
Proposition 2.
Let K ∈ X(M) be a closed and conformal vector field and L a null hypersurface such that K_x ∈ T_xL for all x ∈ L. If L has a screen non-degenerate second fundamental form, then K_x = ν(x)ξ_x for all x ∈ L and L is an orthogonal leaf of K.
In particular, if L is totally umbilical with never vanishing null mean curvature, then L is an orthogonal leaf of K.
Proof. Since g(K, ξ) = 0, Lemma 6 gives us A*(P_S(K)) = 0; but the second fundamental form of L being screen non-degenerate, we have P_S(K) = 0 and K = νξ. Therefore L is an orthogonal leaf of K.

Theorem 6. Let K ∈ X(M) be a closed and conformal vector field and L a null hypersurface with zero null mean curvature such that K_x ∈ T_xL for all x ∈ L. Suppose that ζ is a rigging for L such that:
– dτ = 0.
– C(ξ, X) = 0 for all X ∈ Γ(S).
If (n − 1)(n − 2)ρ² ≤ Ric(K, K) and K_p is null for some point p ∈ L, then K_x = ν(x)ξ_x for all x ∈ L and L is a totally geodesic orthogonal leaf of K.
Proof. As in Theorem 1, we can take a restricted rigging in a neighbourhood p ∈ θ ⊂ L such that τ = 0 and C(ξ, X) = 0 for all X ∈ Γ(S). Since K is tangent to L, we have g(K, ξ) = 0. Moreover, from Lemma 4 we know that the function λ = g(K, K) satisfies ∇λ = 2ρK, and so g(∇λ, ξ) = 0.
Since K is tangent to L, it is spacelike or null along L, so λ ≥ 0 with λ(p) = 0; hence λ has a minimum at p. Then λ vanishes in θ and, by connectedness, on the whole of L. Therefore, K = νξ for a certain ν ∈ C∞(L), and since H = 0 and the orthogonal leaves of K are totally umbilical, L is a totally geodesic orthogonal leaf of K.
As in Remark 2, if we suppose in the above theorem that τ = 0 then we can conclude that ν is a constant.
Finally, the following result gives conditions for a closed and conformal vector field tangent to a null hypersurface to be tangent to the screen distribution.

Theorem 7. Let K ∈ X(M) be a closed and conformal vector field and L a null hypersurface such that K_x ∈ T_xL for all x ∈ L. If there is a preferred rigging ζ for L, K_p ∈ S_p for some p ∈ L, g(K, N) is signed and 0 ≤ g(K, N)ρH, then K_x ∈ S_x for all x ∈ L.
Author contributions: All the authors of the paper have contributed equally.
"Mathematics",
"Physics"
] |
Production of biopolymer precursors beta-alanine and L-lactic acid from CO2 with metabolically versatile Rhodococcus opacus DSM 43205
Hydrogen-oxidizing autotrophic bacteria are promising hosts for the conversion of CO2 into chemicals. In this work, we engineered the metabolically versatile lithoautotrophic bacterium R. opacus strain DSM 43205 for the synthesis of polymer precursors. Aspartate decarboxylase (panD) or lactate dehydrogenase (ldh) was expressed for beta-alanine or L-lactic acid production, respectively. The heterotrophic cultivations on glucose produced 25 mg L−1 beta-alanine and 742 mg L−1 L-lactic acid, while autotrophic cultivations with CO2, H2, and O2 resulted in the production of 1.8 mg L−1 beta-alanine and 146 mg L−1 L-lactic acid. Beta-alanine was also produced at 345 μg L−1 from CO2 in electrobioreactors, where H2 and O2 were provided by water electrolysis. This work demonstrates that R. opacus DSM 43205 can be engineered to produce chemicals from CO2 and provides a basis for its further metabolic engineering.
Introduction
Climate change and global warming necessitate a shift to carbon-neutral chemical manufacture. Consequently, interest in the development of bioprocesses employing autotrophic microbial hosts for the production of chemicals from CO2 is constantly growing (Liu et al., 2020). By virtue of metabolic engineering, autotrophic microorganisms can be equipped with synthetic production routes for the conversion of CO2 into various valuable compounds such as material precursors, flavors and biofuels (Hu et al., 2019). Aerobic and autotrophic species of hydrogen-oxidizing bacteria are particularly attractive production hosts because, in contrast to, e.g., photosynthetic organisms, they are able to assimilate CO2 under dark conditions. These bacteria, also called Knallgas bacteria, employ either the Calvin–Benson–Bassham (CBB) cycle or the reductive tricarboxylic acid cycle for CO2 fixation, and they gain energy from H2 oxidation by the action of hydrogenases (Arai et al., 2010; Emerson and Stephanopoulos, 2019). The substrate H2 can be generated, e.g., by water electrolysis using renewable energy (solar or wind) (Nikolaidis and Poullikkas, 2017). Aerobic hydrogen-oxidizing bacteria couple the electrons from the oxidation of H2 to the electron transfer chain with O2 as the final electron acceptor for respirative adenosine triphosphate (ATP) generation. Therefore, these bacteria can generate more energy to produce biomass and complex natural products such as polyhydroxyalkanoates compared to, e.g., acetogenic bacteria with the anaerobic Wood–Ljungdahl pathway for CO2 fixation (Teixeira et al., 2018; Brigham, 2019).
However, although several species of hydrogen-oxidizing bacteria have been identified, much of their metabolic potential remains untapped because research and production-strain development have concentrated on the species C. necator H16 (Brigham, 2019). The facultative lithoautotrophic strain R. opacus DSM 43205 (formerly referred to as Nocardia opaca 1b) is metabolically different from C. necator H16 and thus an interesting alternative host for the conversion of CO2 to chemicals (Probst and Schlegel, 1973; Aggag and Schlegel, 1974; Klatte et al., 1994). Unlike C. necator H16, which uses polyhydroxybutyrate as a carbon sink, R. opacus naturally accumulates fats (i.e., triacylglycerols), is Gram-positive, and contains only a cytoplasmic NAD+-reducing hydrogenase but no membrane-bound isoenzyme (Aggag and Schlegel, 1974; Schneider and Schlegel, 1977). The cytoplasmic hydrogenase is, however, very similar to the one of C. necator H16 with respect to its catalytic and molecular properties (Schneider et al., 1984). Both enzymes consist of four major subunits harboring a bound catalytic [NiFe] center along with several iron–sulfur clusters and monomeric flavin mononucleotides bound in the auxiliary subunits. The enzymes required for the lithoautotrophic growth of R. opacus DSM 43205 include the cytoplasmic hydrogenase and ribulose-1,5-bisphosphate carboxylase (RuBisCo), which are encoded on the linear extrachromosomal conjugative plasmid pHG201 (Kalkus et al., 1990; Grzeszik et al., 1997). This genetic feature is shared between R. opacus DSM 43205 and C. necator H16, of which the latter contains these lithoautotrophy-related genes on the megaplasmid pHG1 (Schwartz et al., 2003).
Heterotrophic R. opacus strains have been studied due to their oleaginous metabolism and versatile biodegradation pathways (Holder et al., 2011;Henson et al., 2018). Genetic engineering tools are established for R. opacus and used, e.g., for the production of fatty acids, wax esters and alkanes (Huang et al., 2016;Lanfranconi and Alvarez, 2017;Kim et al., 2019). In the present work, we engineered the autotrophic R. opacus DSM 43205 strain to convert CO 2 into beta-alanine and L-lactic acid by expressing heterologous genes encoding aspartate decarboxylase and L-lactate dehydrogenase, respectively. Beta-alanine is a desired precursor for the synthesis of different polymers in the chemical industry (i.e., polyacrylamide, polyacrylate, polyacrylonitrile, and nylon-3) (Könst et al., 2009;Steunenberg et al., 2013;Ko et al., 2020). In addition, beta-alanine has commercial relevance as a nutritional supplement and is a precursor of pantothenate (vitamin B 5 ), coenzyme A and pharmaceutical drugs (Khan et al., 2010;Trexler et al., 2015). In prokaryotes, beta-alanine is mainly produced by decarboxylation of L-aspartate by L-aspartate 1-decarboxylase (Cronan, 1980) ( Figure 1). Extensive metabolic engineering of Escherichia coli and Corynebacterium glutamicum has already resulted in strains producing 85 g/L and 56 g/L beta-alanine from glucose, respectively (Wang et al., 2021;Li et al., 2022).
L-lactic acid, selected as a second production target, is used as a precursor for the production of polymers and has multiple applications in the chemical and other industries (Eiteman and Ramalingam, 2015). Several companies have already successfully commercialized its heterotrophic microbial production, which involves a single-step enzymatic reduction of pyruvate by lactate dehydrogenase (Figure 1). Bacterial and yeast hosts and their engineering for L-lactic acid production have been recently reviewed (Abedi and Hashemi, 2020). In addition, several other products, e.g., the accumulation of intracellular lipids (Kim et al., 2019), can be envisioned for storing CO2 in the future with the autotrophic R. opacus DSM 43205.
In this study, the autotrophic production of beta-alanine and L-lactic acid from CO2 and H2 as the sole carbon and energy sources, respectively, is demonstrated using engineered derivatives of the R. opacus strain DSM 43205. Heterologous genes encoding aspartate decarboxylase (for beta-alanine production) or lactate dehydrogenase (for lactic acid production) were expressed from a plasmid, and the autotrophic production of both polymer precursors was performed by gas fermentation, during which H2 was fed into the cultivation alongside air and CO2 (Yu, 2018). Beta-alanine was additionally synthesized in a bioreactor where H2 was produced electrolytically in situ at a submerged cathode with concomitant oxygen production at an anode (referred to here as an electrobioreactor) (Schlegel and Lafferty, 1965). Moreover, in order to facilitate the further development of this autotrophic production host, R. opacus DSM 43205 was sequenced employing a combination of long- and short-read technologies, leading to an improved genome assembly with higher coverage and fewer contigs compared to the previously published genome (Sangal et al., 2016). The assembled genome was further used to construct a genome-scale stoichiometric metabolic model for R. opacus DSM 43205. The model was utilized to simulate the carbon and energy requirements for beta-alanine and lactic acid production with respect to growth.
2 Materials and methods

2.1 Genome sequencing, assembly, and annotation

Sequencing libraries were prepared by BaseClear BV (Leiden, Netherlands), using an Illumina Nextera XT kit for short-read sequencing and a 10 kb PacBio library preparation technique for long-read sequencing. BaseClear BV performed short-read Illumina-based paired-end sequencing (HiSeq 2500, 2 × 125 bp) at a depth of 200 Mb, and long-read sequencing using the PacBio Sequel SMRT platform. BaseClear BV also performed the quality filtering and delivered the resulting filtered raw sequence reads. We used FastQC (version 0.11.7) to analyse the quality of the raw sequencing reads and to confirm that the short-read data showed high base-call quality across all bases. As the PacBio Sequel platform does not report quality values for the base calls, the long-read data were excluded from the post-hoc FastQC-based quality analysis. De novo genome assembly was performed using Unicycler v0.4.6 (Wick et al., 2017), utilizing the combination of the PacBio-derived long reads and the Illumina-derived short reads. Gene prediction was performed using the bacterial genome annotation pipeline Prokka v1.14.5 (Seemann, 2014), and functional annotations were performed using eggNOG (Huerta-Cepas et al., 2016) and Pannzer (Törönen et al., 2018). The metabolic functional annotations were visualized together with the metabolic annotations of R. opacus strain PD630 (obtained from KEGG) using iPATH3 (https://pathways.embl.de/). The annotated genome sequence was submitted to the European Nucleotide Archive (ENA) (Harrison et al., 2021) under the project accession number PRJEB45460.
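The assembly and annotation steps described above can be scripted along the following lines; the file names are placeholders and the exact options used in the study are not stated in the text, so this is only an illustrative sketch of typical Unicycler and Prokka invocations.

```python
# Illustrative sketch of the hybrid assembly/annotation pipeline described in
# the text. File names and option values are placeholders, not the ones used
# in the study.
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# Hybrid de novo assembly from Illumina paired-end and PacBio long reads.
run(["unicycler",
     "-1", "illumina_R1.fastq.gz",   # short reads, forward
     "-2", "illumina_R2.fastq.gz",   # short reads, reverse
     "-l", "pacbio_reads.fastq.gz",  # long reads
     "-o", "assembly"])

# Gene prediction / annotation of the resulting contigs with Prokka.
run(["prokka",
     "--outdir", "annotation",
     "--prefix", "Ropacus_DSM43205",
     "assembly/assembly.fasta"])
```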
Genome-scale metabolic model reconstruction and simulations
A genome-scale metabolic model for R. opacus DSM 43205 was reconstructed using the automated reconstruction tool CarveMe (Machado et al., 2018). The bacterial universal metabolic model constructed from the reactions in the BiGG models database (King et al., 2016) was used as the reference. The proteome sequence of R. opacus DSM 43205 derived in the gene prediction step was used to calculate the reaction scores for the model reconstruction as follows. The R. opacus DSM 43205 protein sequences were aligned to the sequences of the BiGG genes using Diamond (Buchfink et al., 2014), and the best alignment score for each BiGG gene was used as the gene-level score. The gene-level scores were converted to reaction-level scores via the gene–protein–reaction rules, as described in detail by Machado et al. (2018). Briefly, the protein-level scores were calculated as the average gene-level score of all subunits in each protein complex, and the maximum of the protein-level scores of all isozymes catalyzing each reaction was used as the reaction-level score. The reaction-level scores were normalized with the median reaction score. Enzyme-catalyzed reactions without genetic evidence were given a score of −1, and spontaneous reactions were assigned a score of zero. The R. opacus DSM 43205 genome-scale metabolic model reconstruction was performed with the Python-based metabolic modelling package framed (https://github.com/cdanielmachado/framed), using the IBM ILOG CPLEX LP-solver v. 12.8.0 function cplexlp. The model simulations were performed using cplexlp and the package cobra v. 0.20.0, and processed using the package numpy v. 1.19.5. Flux distributions in optimal H2-limited growth, beta-alanine production, and lactic acid production were simulated using parsimonious flux balance analysis and visualized on the KEGG map of microbial metabolism using iPATH3 (https://pathways.embl.de/). Relative flux values were reflected by the arrow thickness, with the exception that all fluxes less than or equal to 0.04 relative to H2 utilization were represented as 0.04 relative to H2 utilization for the visualization.
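A minimal sketch of this kind of simulation in cobrapy is shown below; the SBML file name, the BiGG-style exchange-reaction identifiers (EX_h2_e, EX_co2_e, EX_o2_e, EX_glc__D_e) and the biomass reaction ID are assumptions, since the exact identifiers of the reconstructed model are not given in the text.

```python
# Minimal pFBA sketch for estimating the H2:CO2 utilization ratio of optimal
# autotrophic growth. Model file name and reaction IDs are assumed placeholders.
import cobra
from cobra.flux_analysis import pfba

model = cobra.io.read_sbml_model("r_opacus_dsm43205.xml")

# Autotrophic gas uptake: allow H2, CO2 and O2 into the model, no glucose.
model.reactions.get_by_id("EX_h2_e").lower_bound = -10.0     # H2-limited
model.reactions.get_by_id("EX_co2_e").lower_bound = -1000.0
model.reactions.get_by_id("EX_o2_e").lower_bound = -1000.0
model.reactions.get_by_id("EX_glc__D_e").lower_bound = 0.0

solution = pfba(model)                 # parsimonious FBA, as used in the text
h2 = solution.fluxes["EX_h2_e"]        # uptake fluxes are negative
co2 = solution.fluxes["EX_co2_e"]
print(f"growth rate: {solution.fluxes['Growth']:.3f}")  # assumed biomass ID
print(f"H2:CO2 utilization ratio: {h2 / co2:.2f}")
```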
Strain and plasmid construction
Synthetic genes encoding the aspartate 1-decarboxylase (panD) of C. glutamicum (NF003947.0) and the lactate dehydrogenases of Plasmodium falciparum (Pfldh) (WP_074506212.1) and Lactobacillus helveticus (Lhldh) (WP_012211363.1) were ordered from Thermo Fisher Scientific. All three genes were optimized for expression in R. opacus, and the optimized gene sequences are listed in Supplementary Material S1, Table 1. The plasmids pDD57 and pDD65 were kindly provided by Drew M. DeLorenzo (Washington University in St. Louis, MO). pDD65 is an empty plasmid with a kanamycin marker and the pAL5000 (S) backbone (Ranes et al., 1990) that was used to construct pDD57 by insertion of the gene gfp+, encoding a modified green fluorescent protein, under the control of a strong constitutive promoter of Streptomyces lividans TK24. In this study, gfp+ in pDD57 was replaced with either panD, Pfldh or Lhldh by releasing gfp+ with NdeI + BamHI digestion and using the remaining plasmid backbone in Gibson assembly cloning with panD, Pfldh, or Lhldh (Gibson et al., 2009). The resulting expression vectors for panD, Pfldh, and Lhldh and the empty vector pDD65 were introduced into R. opacus cells by electroporation. This was done by first growing R. opacus in 50 ml of tryptic soy broth overnight at 30°C, followed by cell harvest by centrifugation. The cells were washed twice with cold 20 mM HEPES buffer (pH 7.2) containing 15% glycerol and once with 5 mM HEPES (pH 7.2) containing 15% glycerol. Then, the cells were suspended in 800 μl of 5 mM HEPES buffer (pH 7.2) containing 15% glycerol, after which 1 μg of plasmid DNA was added to 80 μl of the cell suspension, mixed, and subjected to electroporation in 1-mm cuvettes with the following settings: 25 μF, 400 Ω, 2.5 kV. Subsequently, the cells were incubated at 30°C for 3 h in 800 μl of super optimal broth with catabolite repression (SOC medium) before plating them on tryptic soy agar (TSA) plates containing 50 μg ml−1 kanamycin. The presence of the expression vectors in the resulting R. opacus transformant strains ROP-pDD65 (ctrl), ROP-PfLdh, ROP-LhLdh, and ROP-PanD was verified by colony PCR using plasmid-specific DNA oligomers and DreamTaq DNA polymerase (Thermo Fisher Scientific). The constructed plasmids and strains are listed in Table 1.
Growth media
All cultivations were carried out on a modified DSM-81 mineral medium (www.dsmz.de/microorganisms/medium/pdf/DSMZ_Medium81.pdf). The major chloride salts in the original recipe were replaced with the corresponding sulfates to reduce the formation of chlorine gas during electrolysis in the electrobioreactor cultivations. Additionally, the vitamins were omitted from the recipe, since they did not enhance bacterial growth (data not shown). Furthermore, beta-alanine is a precursor of one of the B vitamins (pantothenate), and therefore the presence of pantothenate in the medium could interfere with beta-alanine synthesis. The final medium composition per 1 L was: 2.3 g KH2PO4, 2.9 g Na2HPO4·2H2O, 5.45 g Na2SO4, 1.19 g (NH4)2SO4, 0.5 g MgSO4·7H2O, 11.7 mg CaSO4·2H2O, 4.4 mg MnSO4·H2O, 5 mg NaVO3, 0.5 g NaHCO3, 5 mg ferric ammonium citrate, 0.5 mg ZnSO4·7H2O, 1.5 mg H3BO3, 1 mg CoCl2·6H2O, 50 μg

TABLE 1 Plasmids used in this study. pDD65: R. opacus empty expression vector containing the pAL5000 (S) backbone and a kanamycin marker (Ranes et al., 1990).

2.5 Inocula and shake flask cultivations

R. opacus transformants were maintained on TSA plates containing 50 μg ml−1 kanamycin, and in each case three individual transformants were studied. Inocula for glucose cultivations were grown at 30°C in 10 ml of modified DSM-81 medium supplemented with 20 g L−1 glucose and 50 μg ml−1 kanamycin in 50 ml Erlenmeyer flasks. Shake flask cultivations performed with glucose as the carbon source were carried out in 250 ml Erlenmeyer flasks in 50 ml of the same medium that was used to grow the inocula. Cultivations were started from an optical density OD600nm of 0.1 and incubated at 30°C with 220 rpm shaking. Inocula for shake flask cultivations and for cultivations in electrobioreactors, both supplemented with CO2 as the carbon source, were prepared by transferring a loopful of cells from a TSA plate into 100 ml Erlenmeyer flasks containing 20 ml of modified DSM-81 medium supplemented with 50 μg ml−1 kanamycin. Precultures were incubated in a sealed container with 130 rpm shaking, into which a gas mix (49% N2, 25% CO2, 13% H2, and 13% O2) was fed at a flow rate of 32 ml min−1 until the OD600nm was 4–5. Shake flask cultivations were started from an OD600nm of 0.1 and carried out in 20 ml of the same medium and under the same conditions as the precultures.
Electrobioreactor cultivations
Electrobioreactor cultivations were performed in MR-1194 Bulk Electrolysis cell vials (100 ml; BASi, West Lafayette, IN) with custom-made Teflon lids as previously described by Nyyssölä et al. (2021). R. opacus preculture was added to 70 ml of modified DSM-81 medium supplemented with 13 μl of Componenta VO antifoam (Ecolab, Oegstgeest, Netherlands) to an OD 600nm of 0.2. A filter-sterilized gas mix consisting of 20% CO 2 and 80% N 2 (AGA, Espoo, Finland) was humidified by bubbling through sterilized water before sparging it into the reactor liquid at a flow rate of 6 ml min −1 . The reactor temperature was maintained at 30°C with water circulated through the heating jacket of the reactor using two external water baths (Julabo, Seelbach, Germany and VWR International, Radnor, PA, United States). The reactor liquid was agitated by magnetic stirring at 400 rpm. A coiled titanium wire coated with a thin layer of iridium oxide (Ø 1.5 mm, Magneto Special Anodes, Schiedam, Netherlands) was used as anode and a coiled stainless steel capillary (Ø 1.6 mm, 316L-SS, Pfeiffer Vacuum GmbH, Asslar, Germany) was used as cathode with surface areas of 12.3 and 14.1 cm 2 , respectively. The voltage and current were controlled using a Wavenow potentiostat (Pine Research Instrumentation, Grove City, PA), and the AfterMath software (Version 1.3.7060, Pine Research Instrumentation). In situ water electrolysis was performed at a current of 18 mA (chronopotentiometry).
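For orientation, the electrolytic gas supply corresponding to the 18 mA current can be estimated from Faraday's law (2 electrons per H2 molecule, 4 per O2); the numbers below are a back-of-the-envelope estimate assuming 100% current efficiency, which is an idealization.

```python
# Estimate of in-situ gas production from the electrolysis current via
# Faraday's law, assuming 100% current efficiency (an idealization).
F = 96485.0           # Faraday constant [C/mol]
I = 0.018             # applied current [A]

h2_mol_per_h = I / (2 * F) * 3600   # 2 e- per H2 molecule
o2_mol_per_h = I / (4 * F) * 3600   # 4 e- per O2 molecule
print(f"H2: {h2_mol_per_h * 1e6:.0f} umol/h")   # ~336 umol/h
print(f"O2: {o2_mol_per_h * 1e6:.0f} umol/h")   # ~168 umol/h
```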
2.7 Analytical methods

2.7.1 Light absorbance measurements to monitor cell growth

Biomass growth was monitored by taking 1–2 ml samples from the cultivations and measuring their optical density at λ = 600 nm (OD600nm) using a UV-1201 UV–vis spectrophotometer (Shimadzu, Kyoto, Japan). Highly dense samples were diluted to obtain an OD600nm in the range of 0.1–0.3. The cell dry weight (CDW) was measured from 2 ml cultivation samples by separating the cells from the soluble culture fraction by centrifugation and washing them twice with MilliQ water before drying them at 105°C overnight. Alternatively, the CDW per litre of culture broth was calculated from the OD600nm of the culture using a previously determined standard curve.
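The OD600-to-CDW conversion mentioned above amounts to applying a linear standard curve; in the sketch below the slope is a made-up placeholder, since the actual calibration determined in the study is not reported in the text.

```python
# Convert culture OD600 readings to cell dry weight using a linear standard
# curve CDW = k * OD600; the slope k is a placeholder, not the study's value.
K_G_PER_L_PER_OD = 0.35   # assumed calibration slope [g CDW per litre per OD]

def cdw_g_per_l(od600, dilution_factor=1.0):
    """Cell dry weight [g/L] from a (possibly diluted) OD600 reading."""
    return K_G_PER_L_PER_OD * od600 * dilution_factor

# Example: a dense sample diluted 1:10 that reads OD600 = 0.25.
print(f"CDW = {cdw_g_per_l(0.25, dilution_factor=10):.2f} g/L")
```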
Quantification of glucose and L-lactic acid by high-performance liquid chromatography
The extracellular concentrations of glucose and L-lactic acid from the glucose-supplemented shake flask cultivations were determined by high-performance liquid chromatography (HPLC) on Fast Acid and Aminex HPX-87H columns (Bio-Rad Laboratories, Hercules, CA) with 2.5 mM H2SO4 as the eluent and a flow rate of 0.5 ml min−1. The column was maintained at 55°C, and analyte elution was detected using a Waters 410 differential refractometer and a Waters 2487 dual-wavelength UV (210 nm) detector (Waters, Milford, MA).
Quantification of L-lactic acid by combined gas chromatography and mass spectrometry
The extracellular concentration of L-lactic acid from the CO2 cultivations was analysed using a 6890 gas chromatograph combined with a 5973 mass selective detector (Agilent, Santa Clara, CA). Each cell culture supernatant sample (50 μl) was spiked with an internal standard (10 μl of 3-hydroxybutyric-1,3-13C2 acid), and the sample was evaporated to dryness under N2 flow. The dried residues were derivatized with a mixture of 50 μl of pyridine and 50 μl of N-methyl-N-(trimethylsilyl)trifluoroacetamide reagent containing 1% trimethylchlorosilane as a catalyst (70°C, 60 min). The injection volume was 1 μl, the injector and inlet temperature was 250°C, and the oven temperature was increased from 50°C to 310°C. The analyses were performed on a DB-5MS capillary column (30 m, ID 250 μm, film thickness 0.25 μm; Part No. 122-5532; Agilent). Lactic acid was quantified by monitoring its m/z 191 ion. The calibration range for lactic acid was 0.3–33 μg per sample.
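Quantification against a spiked internal standard of the kind described above typically reads the analyte amount off a calibration line fitted to analyte-to-standard peak-area ratios; the sketch below illustrates the arithmetic with invented response values, not data from the study.

```python
# Internal-standard quantification sketch: analyte amount is read off a
# calibration line fitted to (peak area ratio) vs (analyte amount).
# All numbers are invented placeholders, not data from the study.
import numpy as np

# Calibration: known lactic acid amounts [ug/sample] vs area ratio to the
# 13C-labelled internal standard.
amounts = np.array([0.3, 1.0, 3.0, 10.0, 33.0])
ratios = np.array([0.05, 0.17, 0.50, 1.65, 5.40])
slope, intercept = np.polyfit(ratios, amounts, 1)   # linear fit

sample_ratio = 0.92                                  # measured area ratio
print(f"lactic acid = {slope * sample_ratio + intercept:.2f} ug/sample")
```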
Quantification of beta-alanine concentration by high-performance liquid chromatography
The extracellular beta-alanine concentration was analysed by ultra-performance liquid chromatography (UPLC). 250 μl of cell culture supernatant was deproteinized by adding 750 μl of ethanol (99.5%); the samples were mixed and centrifuged. The supernatant was transferred to a new vial, and the samples were concentrated under a stream of N2. Finally, the volume was adjusted to 80 μl, and 20 μl of borate buffer (MassTrak™, Waters) was added. 10 μl of the sample solution was analysed. The internal standard solution (norvaline, 25 μM), MassTrak™ Amino Acid Analysis (AAA) borate buffer and 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate reagent were added, and the sample mixture was instantly vortexed before incubation at 55°C for 10 min. Amino acid standard mixtures were derivatized identically to the samples.
UPLC analysis was performed using an Acquity UPLC system equipped with a UV detector (Waters, Milford, MA, United States). Chromatography was performed using an Acquity MassTrak™ (2.1 mm × 100 mm, 1.7 μm) column (Waters) kept at 43°C. The injection volume was 1 μl. Separation was performed by gradient elution with 10% (v/v) MassTrak™ AAA eluent A concentrate in water and MassTrak™ AAA eluent B at a flow rate of 0.4 ml min−1. The signal for beta-alanine was detected at 260 nm. The MassTrak™ AAA derivatization kit, MassTrak™ AAA concentrate A and eluent B were obtained from Waters. Amino acid standard solution, physiological amino acid standards, L-isoleucine, glutamine, and norvaline were obtained from Sigma-Aldrich (St. Louis, MO).
Genome analysis of R. opacus DSM 43205
The genome sequence of R. opacus DSM 43205 was determined by combined short- and long-read sequencing in order to obtain data of sufficient sequencing coverage and accuracy to allow identification of the autotrophy-related genes and construction of a metabolic model. The Illumina data set contained over 5.53 million paired-end reads, with lengths ranging from 50 to 126 bases. The PacBio-based long-read data set contained over 758 × 10³ reads, with lengths ranging from 50 to 41,685 bases. The Unicycler-based genome assembly contained eighteen contigs (length >1,500 bases), together representing a genome of 8,942,682 bases. The largest contig (and N50) was 6,484,583 bases long. The GC content of the assembly was 67%. A total of 8,418 gene coding sequences (CDS) were found in the genome assembly. Of these, the majority (n = 5,999) were found on the largest contig. A number of the sequences (n = 2,684) were identified as metabolic enzymes with known EC numbers, which mapped to over 145 metabolic pathways in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. The pathway coverage of metabolic enzymes was found to be similar to that of R. opacus PD630 (Supplementary Material S1; Figure 1). The gene encoding RuBisCo (EC:4.1.1.39) was found in the genome assembly (on the fifth largest contig), confirming the presence of the CBB cycle, along with the hydrogenase genes (NAD-reducing hydrogenase HoxS subunits, EC: 1.12.1.2, in contigs 2 and 5) (Supplementary Material S1; Figure 2). Together, these markers confirmed the experimentally observed CO2 metabolism of the R. opacus DSM 43205 strain and suggest that the fifth contig of the genome assembly represents the autotrophic plasmid pHG201 described earlier (Kalkus et al., 1990; Grzeszik et al., 1997; Schwartz, 2009).
Genome-scale metabolic model simulations
The automatically reconstructed genome-scale metabolic model of R. opacus DSM 43205 (containing 2,267 reactions and 1,499 metabolites) was manually curated for aerobic autotrophic growth with the soluble hydrogenase and oxygen as the final electron acceptor. The model predicted that the H2:CO2 utilization ratio for optimal growth was 4.11 mol H2 per mol CO2, without considering the H2 oxidation required for growth-rate-independent maintenance. At a higher ratio, growth was predicted to be limited by the availability of CO2, whereas at a lower utilization ratio growth was predicted to be H2-limited. Model simulations were also performed to predict the H2:CO2 utilization ratios required for optimal lactic acid and beta-alanine production by R. opacus DSM 43205. The ratios predicted for the optimal synthesis of these compounds were very similar for the two products and slightly lower than for growth: 3.07 mol H2 per mol CO2 for L-lactic acid and 3.19 mol H2 per mol CO2 for beta-alanine. The predictions did not consider the growth-rate-independent requirement of H2 oxidation for maintenance energy. The required maintenance energy would be expected to cause an equal additive increase in H2 utilization for growth and production, but would depend on the extracellular conditions (Bongers, 1970). Thus, if the cells utilized H2 and CO2 at the ratio optimal for growth, CO2 fixation would limit production. Based on the simulations, there were no large differences in the H2:CO2 utilization ratio for optimal growth or for the production of either of the products. A schematic figure (Figure 2) also shows how the flux distributions in optimal growth, beta-alanine production, and lactic acid production are notably similar, excluding the biosynthetic fluxes, which are low relative to the central metabolic fluxes. Therefore, the cultivations were performed with identical gas feeding ratios. Thus, in addition to suggesting beneficial genetic modifications, this manually curated model can help in designing the experimental set-up.
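The predicted ratios can be cross-checked against a simple degree-of-reduction (electron) balance: each H2 carries two electrons, and each carbon in lactate or beta-alanine must be reduced from CO2, so a stoichiometric minimum H2 demand per CO2 fixed into product follows directly, with anything above the minimum reflecting respiration for ATP. The arithmetic below is such a back-of-the-envelope check, not a result from the paper's model.

```python
# Degree-of-reduction sanity check: minimum H2 per CO2 fixed into product.
# gamma = 4*C + 1*H - 2*O - 3*N (electrons relative to CO2, H2O, NH3).
# Illustrative arithmetic only; the quoted model ratios are from the text.
def gamma(C, H, O, N=0):
    return 4 * C + H - 2 * O - 3 * N

products = {
    "L-lactic acid (C3H6O3)": (3, 6, 3, 0, 3.07),   # last value: model ratio
    "beta-alanine (C3H7NO2)": (3, 7, 2, 1, 3.19),
}
for name, (C, H, O, N, model_ratio) in products.items():
    min_h2_per_co2 = gamma(C, H, O, N) / 2 / C       # 2 electrons per H2
    print(f"{name}: minimum {min_h2_per_co2:.1f}, model {model_ratio:.2f} "
          f"mol H2 per mol CO2 (the surplus sustains respiration/ATP)")
```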
Heterotrophic and autotrophic L-lactic acid production
L-lactate production was studied by expressing genes encoding the L-lactate dehydrogenases of P. falciparum (Pfldh) and L. helveticus (Lhldh) in R. opacus DSM 43205. These enzymes have different kinetic properties, and both have been expressed before for heterologous L-lactic acid production (Novy et al., 2017). The L-lactate dehydrogenase genes were cloned into the expression plasmid pDD57, where their expression was controlled by a constitutive promoter of S. lividans TK24. R. opacus DSM 43205 was transformed with the pDD57 construct harbouring either Pfldh or Lhldh, yielding the strains ROP-PfLdh and ROP-LhLdh, respectively. First, in order to confirm the activity of the expressed L-lactate dehydrogenases in R. opacus, L-lactic acid production by three ROP-PfLdh and three ROP-LhLdh transformants was studied in shake flask cultures supplemented with 20 g L−1 glucose. The R. opacus strains ROP-PfLdh and ROP-LhLdh produced up to 742 mg L−1 and 608 mg L−1 L-lactic acid at rates of 43 ± 8 mg L−1 h−1 and 37 ± 4 mg L−1 h−1 and specific productivities of 20 ± 2 mg gCDW−1 h−1 and 17 ± 2 mg gCDW−1 h−1, respectively (Figure 3A; Table 2). In both cultivations, L-lactic acid accumulated at the beginning, during the first 20 h, when both glucose consumption and biomass accumulation occurred at low rates. During the following 10 h, the cells rapidly consumed the produced L-lactic acid and almost all the glucose, and accumulated biomass until 53 h, as indicated by an increase of OD600nm up to 35. The pH in the cultivations of the L-lactic acid producing strains was lower than that of the control strain with the empty expression vector during the L-lactic acid production phase, but did not differ remarkably from the control at the later stages of the cultivations (Figure 3B). Interestingly, the L-lactic acid producing strains consumed glucose slightly faster and reached a somewhat higher biomass than the control strain. The control strain ROP-pDD65 with the empty plasmid did not produce L-lactic acid. A homology search of the genome sequence of R. opacus DSM 43205 did not suggest the presence of an L-lactate dehydrogenase in R. opacus, which is in accordance with its inability to produce L-lactic acid. Instead, the R. opacus DSM 43205 genome has homologs for L-lactate cytochrome c reductase, which links lactate oxidation to electron transfer to the respiratory chain (Ingledew and Poole, 1984), for the lutABC operon, which is present in the genomes of various bacteria and is linked to lactate utilization and biofilm formation (Chai et al., 2009), and for lactate 2-monooxygenase (Kean and Karplus, 2019).
Next, L-lactic acid production from CO2 was investigated. Both strains, ROP-PfLdh and ROP-LhLdh, were grown under autotrophic conditions in shake flask cultures under a gas atmosphere with the composition of 49% N2, 25% CO2, 13% H2, and 13% O2. Growth of the control strain and ROP-PfLdh is shown in Figures 3C,D. Especially in the case of ROP-PfLdh, there was a large variation in growth and L-lactic acid production between the transformants studied (Figure 3D). The specific L-lactic acid productivities of the R. opacus strains ROP-PfLdh and ROP-LhLdh under autotrophic growth conditions were 0.6 ± 0.3 mg gCDW−1 h−1 and 0.3 ± 0.1 mg gCDW−1 h−1, respectively, significantly lower than those measured when glucose was used as the carbon source (Table 2). Similar to the heterotrophic cultivations, the highest L-lactic acid accumulation occurred during the early exponential growth phase of strains ROP-PfLdh and ROP-LhLdh. Notably, ROP-LhLdh produced less L-lactic acid and more biomass than ROP-PfLdh, while the transformants of the latter strain accumulated less biomass and produced over two-fold more L-lactic acid. In contrast, no significant difference was observed in growth and L-lactic acid production between the strains ROP-PfLdh and ROP-LhLdh during the heterotrophic cultivations (Figure 3A). Finally, the characteristics of the strains and L-lactic acid production were studied in electrobioreactors with 20% CO2 introduced into the cultivation broth by sparging. H2 and O2 were synthesized in situ by water electrolysis performed at a constant current of 18 mA. Under these conditions, both ROP-PfLdh and ROP-LhLdh exhibited cell growth (Figure 3E), but L-lactic acid accumulation could not be detected. The ROP-LhLdh transformants grew to higher cell densities than the control strain, similar to the shake flask cultures grown with CO2 as the carbon source. The pH of the fermentations was followed regularly, and the drop in pH correlated with the growth of the strains. The final pH of all fermentations remained above pH 5.5 (Figure 3E).

TABLE 2 Volumetric and specific L-lactic acid and beta-alanine production rates of R. opacus strains measured during autotrophic electrobioreactor (EB-CO2) and shake flask cultivations with either glucose (SF-Glc) or CO2 (SF-CO2) as the carbon source. In all performed EB-CO2 trials, L-lactic acid concentrations remained below the detection limit and are not shown here. The shown data are determined from triplicate measurements (i.e., three individual transformant cultivations per strain), and the CDW-specific productivity was determined from data recorded from the cultivation start until the time of the measured maximum product concentration (see Figures 3, 4).
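The volumetric and specific rates reported in Table 2 are of the form ΔP/Δt and ΔP/(average CDW × Δt); the snippet below illustrates this calculation with invented inputs, since the raw time-course values are only shown in the figures.

```python
# Productivity arithmetic behind Table 2-style numbers. The inputs are
# invented placeholders, not measurements from the study.
p0, p1 = 0.0, 742.0       # product concentration [mg/L] at t0 and t1
t0, t1 = 0.0, 17.0        # time [h]
cdw_avg = 2.2             # average cell dry weight over the interval [g/L]

volumetric = (p1 - p0) / (t1 - t0)           # [mg L^-1 h^-1]
specific = volumetric / cdw_avg              # [mg gCDW^-1 h^-1]
print(f"volumetric: {volumetric:.1f} mg/L/h, specific: {specific:.1f} mg/gCDW/h")
```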
Heterotrophic and autotrophic beta-alanine production
The gene encoding the aspartate 1-decarboxylase (panD) of C. glutamicum, which showed relatively good beta-alanine productivity in heterotrophic production in E. coli (Song et al., 2015), was cloned into the expression plasmid pDD57 and placed under the control of a constitutive promoter of S. lividans TK24. R. opacus DSM 43205 was transformed with this vector, resulting in the R. opacus strain ROP-PanD. The R. opacus strain ROP-PanD and the control strain ROP-pDD65 exhibited very similar growth profiles during heterotrophic cultivation on glucose (Figure 4A) and reached an OD600nm of up to 38. The beta-alanine concentration in the culture broth of the R. opacus strain ROP-PanD reached 25 mg L−1, corresponding to a specific productivity of 0.2 ± 0.002 mg gCDW−1 h−1. Notably, in the glucose cultivations beta-alanine production was also detected in the control experiment with the strain ROP-pDD65, yielding product concentrations of up to 1.6 mg L−1. In both cases, the beta-alanine concentration increased during the early growth phase, when glucose was still consumed at a relatively low rate. Thereafter, during the late exponential growth
FIGURE 4
Heterotrophic and autotrophic shake-flask cultivation (A and B, respectively) and autotrophic electrobioreactor cultivation (C) of transformants of the R. opacus strain ROP-PanD expressing the aspartate decarboxylase gene of C. glutamicum (panD). Beta-alanine concentration and cell density (OD600) were followed during the course of the cultivations and are displayed as the average of biological triplicates with standard deviation. R. opacus pDD65 is included as a control strain (ctrl).
The beta-alanine concentration in shake flask cultivations of R. opacus ROP-PanD reached maximum values of 1.8 mg L−1 when the energy and carbon substrates were provided in gaseous form (49% N2, 25% CO2, 13% H2, and 13% O2) (Figure 4B). The specific beta-alanine productivity of the autotrophic shake flask cultivations was 0.01 ± 0.003 mg g CDW−1 h−1, corresponding to only ~5% of the value measured for the heterotrophically grown strain (Table 2). It is noteworthy that, when CO2 was provided as the exclusive carbon source, beta-alanine was produced during a ca. 100 h lag phase before growth initiation and was then consumed during the subsequent growth phase, as apparent from the observed reduction in product concentration. Beta-alanine-producing transformants reached 17% higher OD600nm values than the control strains (OD600nm 5 vs. 4.2). In contrast to L-lactic acid, beta-alanine production was also demonstrated in an electrobioreactor sparged with 20% CO2 and supplied with H2 and O2 by water electrolysis at a constant current of 18 mA. Under these conditions, ROP-PanD strains produced 345 μg L−1 beta-alanine from CO2 (Figure 4C). Beta-alanine started to accumulate already during the lag phase of the cultivations, but the highest production coincided with the onset of the exponential growth phase. Subsequently, the produced beta-alanine was consumed from the medium. ROP-PanD strains reached 50% higher biomass (OD600nm = 5.5) than the control strains.
Discussion
R. opacus strain DSM 43205 is an interesting representative of the lesser-known aerobic facultative chemolithotrophs that could potentially be used for CO2-based chemical synthesis (Aggag and Schlegel, 1974; Brigham, 2019). In the present study, we sequenced the genome of R. opacus strain DSM 43205, built a model of its metabolism, and genetically engineered it for the conversion of either CO2 or glucose to the biopolymer precursors beta-alanine or L-lactic acid. This was done by expressing recombinant genes encoding aspartate decarboxylase or L-lactate dehydrogenase, respectively. Beta-alanine production from CO2 was also demonstrated in electrobioreactors, where H2 and O2 were provided by in situ water electrolysis. The use of H2 as a gaseous substrate in fermentations can be difficult due to its low solubility and its flammability in the presence of oxygen over a wide concentration range (4%-94%). The in situ water electrolysis for hydrogen provision employed here can be considered a sustainable and safe alternative to the conventional provision of H2 gas from large reservoirs located outside the bioreactor.
Beta-alanine is a precursor of pantothenate and coenzyme A biosynthesis, and its biosynthesis from phosphoenolpyruvate involves at least three enzymatic reactions, depending on whether aspartate is formed directly from oxaloacetate or through the TCA cycle (Piao et al., 2019) (Figure 1). The sequenced R. opacus genome encodes genes for both of these routes. It also possesses an endogenous gene for aspartate decarboxylase, and a small amount of beta-alanine was produced in both glucose and CO2 cultivations with the control strain. Overexpression of panD from C. glutamicum increased beta-alanine production significantly from both carbon sources. Interestingly, ROP-PanD strains also produced more biomass than the control from both glucose and CO2, especially in electrobioreactor cultivations. There is no evident reason for this, but possibly the consumption of the produced beta-alanine from the growth medium boosted carbon metabolism and growth. Moreover, decarboxylation of aspartate to beta-alanine releases intracellular CO2 that may be more readily available as a substrate for the CBB cycle than the extracellular CO2 sparged into the growth medium.
No L-lactic acid production could be observed during hetero- or autotrophic cultivation of R. opacus strain DSM 43205. The production of L-lactic acid requires a single enzymatic step catalysing the reduction of the central carbon metabolite pyruvate. Here, we overexpressed L-lactate dehydrogenase genes with different catalytic properties from L. helveticus (Lhldh) and P. falciparum (Pfldh) in R. opacus. PfLdh has a significantly higher catalytic efficiency and affinity for pyruvate (Km = 0.03 mM) than LhLdh (Km = 0.25 mM) (Novy et al., 2017). ROP-PfLdh strains produced almost 2.5 times more L-lactic acid from CO2 in shake flask cultivations than ROP-LhLdh strains, which may be attributed to the high substrate affinity of PfLdh. ROP-PfLdh also showed a higher specific L-lactic acid productivity owing to its lower biomass production. Interestingly, both strains produced almost equal amounts of L-lactic acid from glucose, possibly due to a higher intracellular pyruvate concentration under these conditions. Surprisingly, no L-lactic acid production could be detected during the electrobioreactor cultivations, although biomass production did not differ much from the autotrophic shake flask cultivations. As in the autotrophic shake flask cultivations, ROP-LhLdh strains grew to higher cell densities than the control and ROP-PfLdh strains. It is possible that, in the small electrobioreactors, water splitting at a relatively low current resulted in limited generation of H2, and that the produced L-lactic acid was even more readily utilized by the cells for carbon and redox supply than in the autotrophic shake flask cultures.
Beta-alanine and L-lactic acid offered the possibility to study chemical production from CO2 via different metabolic routes: L-lactic acid is linked directly to central carbon metabolism, whereas beta-alanine is a product of amino acid metabolism. Despite the difference in the length of the L-lactic acid and beta-alanine biosynthetic routes from pyruvate, the production of both compounds was observed already during the lag and early exponential growth phases, independent of whether glucose or CO2 was available as the carbon source. In particular, L-lactic acid production appeared to increase the initial glucose consumption rate at the beginning of the cultivations and was followed by consumption of both L-lactic acid and beta-alanine. There is no clear explanation for the long lag phase of growth observed on both carbon sources, especially when cells were grown on CO2. Kim et al. (2019) examined the long 48 h lag phase of R. opacus PD630 in glucose fermentation and demonstrated that decreasing the cultivation pH from 7.0 to 6.4 reduced the lag phase significantly, to 24 h. However, economically viable production titers would require more elaborate metabolic engineering efforts. As an example, in E. coli, efficient production of beta-alanine from glucose requires overexpression of all genes encoding the enzymes of the reductive branch of the TCA cycle and deletion of several side-product pathways (Piao et al., 2019; Zou et al., 2020). In E. coli, the uptake of beta-alanine is performed by an active amino acid transporter (Schneider et al., 2004). Deletion of the corresponding transporter present in the R. opacus genome could possibly prevent beta-alanine utilization from the surrounding growth medium. Likewise, improved L-lactic acid production would require more extensive metabolic engineering, as exemplified by work carried out with E. coli (Jiang et al., 2017). Sequencing of the R. opacus DSM 43205 genome revealed the presence of open reading frames with homology to L-lactate cytochrome c reductase, the lutABC operon, and lactate 2-monooxygenase, all involved in the lactate catabolism observed in our cultivations (Ingledew and Poole, 1984; Chai et al., 2009; Kean and Karplus, 2019). However, inefficient homologous recombination in R. opacus hampered our efforts to delete these open reading frames from the genome, despite the use of sacB counter-selection and the help of bacteriophage recombinases described and used in earlier studies (Kita et al., 2009; DeLorenzo et al., 2018). Recently, fast-growing interest in the metabolic versatility of R. opacus and in its use as a platform for biocatalysis, biodegradation, and biosynthesis has driven the development of more genome engineering tools for this organism. New genetic parts have been validated and, most importantly, a CRISPR/Cas9-based gene knockout system has been reported (Liang et al., 2021; Grechishnikova et al., 2022). These tools are anticipated to facilitate and accelerate the engineering of R. opacus DSM 43205 to yield outstanding strains for the production of biochemicals and biofuels from CO2.
Likewise, C. necator, the best-known lithoautotroph, has been engineered for autotrophic production of various chemical compounds. Cultivation systems using shake flasks or classical bioreactors flushed with commercial gas mixtures of variable CO2, H2, and O2 concentrations have typically demonstrated proof-of-concept production levels in the mg/L range, comparable to the L-lactic acid produced in the present study (Müller et al., 2013; Grousseau et al., 2014; Przybylski et al., 2015; Crépin et al., 2016; Chakravarty and Brigham, 2018; Krieg et al., 2018; Windhorst and Gescher, 2019). The differing cultivation conditions applied, and the in most cases compromised and growth-limiting gas supply, make comparison of the performance of R. opacus DSM 43205 with C. necator production strains difficult. Recent studies with C. necator indicated the importance of optimizing the gas fermentation process and bioreactor design for improved product yield (Garrigues et al., 2020). Such optimization also enables higher biomass and more realistic conditions for the determination of product titers, productivities, and yields.
The observed predominant assimilation of externally supplied CO2 into biomass rather than products motivates further metabolic studies of R. opacus DSM 43205 that may enhance its industrial applicability for chemical production. Genome sequence refinement and genome-scale model reconstruction promote the discovery of further engineering strategies for R. opacus DSM 43205, including in silico strain design (Fang et al., 2020), and provide the data needed for its development into a potent autotrophic production host.
Conclusion
In summary, we have shown that the hydrogen-oxidizing bacterium R. opacus DSM 43205 can be engineered to synthesize value-added compounds using CO2 as the exclusive carbon source and hydrogen as the energy source. Both conventional gas fermentation and electrobioreactor cultivation using in situ water electrolysis were found suitable for carbon assimilation into the desired products with engineered strains of R. opacus DSM 43205, although attenuation of product formation was observed in the electrobioreactor cultivations. The novel hosts resulting from this work contribute to the transition from CO2-releasing manufacture of chemicals to CO2-fixing bioprocesses. The cultivation methods and metabolic models developed in this project will facilitate further studies of still largely unexplored lithoautotrophic microbial species.
Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/Supplementary Material.
Funding
This work was financially supported through Academy of Finland research grants MOPED (Decision No. 295883), Optobio (Decision No. 287011) and KNALLRED (Decision No. 342124), Business Finland research grant Fermatra (Diary No. 908/31/2016) and a joint research grant by the Technology Industries of Finland Centennial Foundation and the Jane and Aatos Erkko Foundation on "Feed and food from CO2 and electricity - the research and piloting of future protein production." PJ and ML would like to acknowledge funding from the Academy of Finland (decision numbers 310514 and 321723, respectively).

| 9,730 | 2020-08-27T00:00:00.000 | ["Engineering", "Biology"] |
Tissue Engineering in Stomatology: A Review of Potential Approaches for Oral Disease Treatments
Tissue engineering is an emerging discipline that combines engineering and the life sciences. It can construct functional biological structures in vivo or in vitro to replace native tissues or organs and to alleviate the serious shortage of donor organs in tissue and organ reconstruction or transplantation. Organ transplantation has achieved success using tissue-engineered heart, liver, kidney, and other artificial organs, and the emergence of tissue-engineered bone also provides a new approach for the healing of human bone defects. In recent years, tissue engineering technology has gradually become an important technical method in dentistry research, and its application in stomatology-related research has yielded impressive achievements. The purpose of this review is to summarize the research advances of tissue engineering and its application in stomatology. These aspects include tooth, periodontal, dental implant, cleft palate, oral and maxillofacial skin or mucosa, and oral and maxillofacial bone tissue engineering. In addition, this article summarizes the cells, scaffolds, and growth factors commonly used in stomatology and discusses the limitations of tissue engineering in stomatology from the perspectives of cells, scaffolds, and clinical applications.
INTRODUCTION
In the 1980s, Professor Joseph P. Vacanti and Robert Langer from the United States first explored tissue engineering research (Vacanti et al., 1988). In 1993, they defined tissue engineering in an article as "an interdisciplinary field that applies the principles of engineering and the life sciences toward the development of biological substitutes that restore, maintain, or improve tissue function" (Langer and Vacanti, 1993).
Nowadays, tissue engineering technology is booming and has become a popular research method for the reconstruction of damaged or missing tissues and organs (Fang et al., 2021; Farhat et al., 2021; Shang et al., 2021), and breakthroughs have been made in many fields (Figure 1) (Gosselin et al., 2018; Anandakrishnan and Azeloglu, 2020; Mirdamadi et al., 2020; Berbéri et al., 2021; Li et al., 2021; Scott et al., 2021). Therefore, we believe that tissue engineering technology will create extensive innovation in the field of stomatology. The basic principle of tissue engineering is to collect functionally related cells and seed them on a natural or synthetic scaffold with a certain spatial structure, inducing cell proliferation through the influence of growth factors and thereby regenerating tissues or organs (Figure 2) (Dzobo et al., 2018; Dey et al., 2020).
… forming a functional vascular network (Table 3) (Yang et al., 2012). In conclusion, the core of tissue engineering lies in the establishment of a perfect three-dimensional spatial complex that consists of scaffolds, seed cells, and growth factors (Table 4).
TOOTH TISSUE ENGINEERING
The tooth, an indispensable organ for humans, consists of soft connective tissue, namely, the pulp in the pulp cavity, and three outer layers of mineralized hard tissue, namely enamel, cementum, and dentin, and plays an important role in mastication, pronunciation, and aesthetics. Tooth development is accomplished by a series of epithelial-mesenchymal interactions and reciprocal inductions, which ultimately lead to cell differentiation and developmental space formation (Yuan and Chai, 2019). Tooth loss, which has many causes, such as dental caries, tooth agenesis, or trauma, is a common oral condition that seriously affects physiological functions and even increases the morbidity of gastrointestinal cancer (Ma et al., 2018), cardiovascular disease, and stroke (Cheng et al., 2018). Moreover, permanent teeth are not renewable once lost. At present, removable dentures and fixed dentures are commonly used in the clinic to replace missing teeth, but these traditional restorative methods suffer from some flaws, such as discomfort and inefficient mastication (Hejazi et al., 2021). Hence, the construction of biological tissue-engineered teeth has emerged to overcome these disadvantages. Tooth regeneration therapy for dental tissue repair and whole-tooth replacement has been a long-term goal in dentistry.
Researchers have already made some progress in the regeneration of partial dental tissues. Regenerative endodontics (RE) mostly utilizes strategies of cell homing and cell transplantation to repair or replace necrotic tissue and regenerate the dentine-pulp complex (DPC) (Morotomi et al., 2019). First, the main principle of cell homing is that the body's stem cells are recruited and induced to accumulate at the defect site, leading to endogenous tissue regeneration (Wang X. et al., 2018), but the mechanism and application prospects still require much research to clarify. In addition, cell transplantation is currently the main approach for achieving pulp tissue regeneration. A study combining pulp stem cells with granulocyte colony-stimulating factor (G-CSF) in a canine pulpectomy model found that pulp tissue containing vasculature and innervation filled the entire root canal, thereby achieving successful pulp tissue regeneration (Iohara et al., 2013). Some researchers have also attempted to develop a biomimetic tooth bud model with dental cells encapsulated within gelatin methacrylate (GelMA) hydrogel scaffolds to obtain a mineralized crown (Smith et al., 2017).
Simultaneously, whole-tooth bioengineering using embryonic tooth bud cells has been established in several animal models, including mice, rats, pigs, and dogs (Zhang and Chen, 2014). Cai et al. found that integration-free human urine-induced pluripotent stem cell (ifhU-iPSC)-derived epithelial sheets recombined with mouse dental mesenchyme could successfully regenerate tooth-like structures (Cai et al., 2013). Wang et al. proved the feasibility of whole-tooth regeneration in large animals by reconstituting single cells from the fourth deciduous molar tooth germ (p4) of pigs into bioengineered tooth buds for in vitro culture and in vivo transplantation into mouse subrenal capsules and jawbones. As a result, the pig bioengineered tooth buds restored odontogenesis and developed to large tooth sizes. Ono et al. dissected canine permanent premolar (P2, P3, and P4) tooth germs from the mandibles of beagles and then transplanted them into alveolar bone sockets of the same mandible, achieving functional whole-tooth restoration by autologous transplantation of bioengineered tooth germ in a large animal model (Ono et al., 2017). Zhang et al. used decellularized tooth bud (dTB) scaffolds created from natural porcine tooth buds (TBs) and successfully formed mineralized whole teeth in miniature pig jaws in vivo.
These results indicate that tissue-engineered teeth have bright prospects for tooth regeneration and can effectively solve the oral problems posed by tooth loss. In the future, emerging technologies will provide increasingly advanced ideas for tooth regeneration.

Abbreviations: EGF, epidermal growth factor; BMP, bone morphogenetic protein; IGF, insulin-like growth factor; TGF-β, transforming growth factor-β; VEGF, vascular endothelial growth factor; PDGF, platelet-derived growth factor; VSMCs, vascular smooth muscle cells; bFGF, basic fibroblast growth factor; NGF, nerve growth factor.
PERIODONTAL TISSUE ENGINEERING
Periodontal tissue diseases usually involve periodontal inflammation and trauma, including destruction of the cementum, gingiva, periodontal ligament, and alveolar bone. The formation of periodontal pockets and the resorption of alveolar bone are typical manifestations of periodontitis and eventually lead to tooth loss. The ideal periodontal treatment would achieve complete functional regeneration of the alveolar bone, cementum, and periodontal ligament to obtain new periodontal attachment (Iwata et al., 2014). Traditional periodontal therapy only removes bacteria and delays the disease process; it rarely achieves periodontal regeneration. Different from traditional periodontal therapy, periodontal tissue engineering is a new concept for reconstructing defective periodontal tissues and organs and has developed rapidly in recent years. Traditional tissue engineering methods are based on combining scaffolding materials with seed cells. Mrozik et al. cultured and purified sheep PDLSCs in vitro, combined them with gelatin sponges, and implanted them into periodontal defects of the second premolar; the newly formed alveolar bone, cementum, and Sharpey's fibers were significantly more abundant than in the control group without stem cell inoculation (Mrozik et al., 2013). Fu et al. treated animal models of periodontitis with stem cells isolated from miniature pig deciduous teeth (SPDs) plus hydroxyapatite/tricalcium phosphate (HA/TCP), and the loss of soft and hard tissue showed significant restoration after 12 weeks (Fu et al., 2014).
… and new alveolar bone, cementum, and periodontal ligament eventually formed (Wu et al., 2018). In terms of the processing and manufacturing of scaffolds, electrospinning technology is expected to provide more appropriate materials for tissue engineering. Higuchi et al. produced biodegradable membranes for the regeneration of periodontal tissue defects by electrospinning and sonocoating with nano-hydroxyapatite particles (Higuchi et al., 2019). Sprio et al. fabricated hybrid superparamagnetic three-layer scaffolds simulating the 3D environment of the periodontium, which is conducive to boosting osteogenic and osteoconductive stimulation (Sprio et al., 2018).
Regarding cell culture, cell sheet technology (CST) is defined as a cell transplantation method that does not require scaffolding materials and can preserve an intact extracellular matrix (Sprio et al., 2018). Some researchers transplanted cell sheets supported by calcium phosphate-coated electrospun polycaprolactone (CaP-PCL) scaffolds onto denuded roots, and alveolar bone formation occurred at the defect site after 4 weeks, confirming that the combination of cell sheets and CaP-PCL scaffolds can promote periodontal regeneration (Dan et al., 2014). All these results provide important insights into the advancement of periodontal tissue engineering, and it is believed that, with its further development, complete periodontal regeneration will become an attainable goal.
DENTAL IMPLANT TISSUE ENGINEERING
We have mentioned the importance of teeth to humans and some related studies on the use of tooth tissue engineering to repair tooth loss. Dental implantation is another common method to restore missing teeth. In implant restoration, an artificial tooth root is implanted into the alveolar bone of the edentulous area to replace the natural tooth root, and the missing tooth is subsequently restored with an artificial crown supported by the implant (Figure 3). Although dental implants overcome some disadvantages of dentures and effectively repair defects caused by tooth loss, two conditions still hinder the development of dental implant technology: 1) insufficient local bone mass at the implant site (Pardal-Peláez et al., 2021) and 2) insufficient soft tissue around the implant (Noh et al., 2021). Dental implant tissue engineering mainly uses tissue engineering technology to modify the alveolar bone and soft tissue environment before the implant is placed into the alveolar bone of the edentulous area, in order to achieve good osseointegration (Hao et al., 2021) and soft tissue augmentation.
On the one hand, tissue engineering contributes to overcoming the obstacles encountered with bone regeneration during dental implantation. Yun et al. applied platelet-rich plasma (PRP) and human bone marrow mesenchymal stem cells (BMMSCs) to the bone defect area around dental implants with porous hydroxyapatite (HA) as the scaffold and determined the bone regeneration ability of BMMSCs and PRP histologically. The data showed that the HA + BMMSC + PRP group had a higher bone density between 6 and 12 weeks (Yun et al., 2014). To investigate the role of umbilical cord mesenchymal stem cells (UCMSCs) in bone defects around implants after immediate implantation, Hao et al. filled the defect on one side with platelet-rich fibrin (PRF) and UCMSCs, filled the other side with PRF only as the control, and placed a titanium implant into the extraction socket. The results showed that UCMSCs can promote the formation of new bone in the bone defect area around implants; hence, UCMSCs can serve as excellent cells for the regeneration of bone defects after implantation (Hao et al., 2014).
On the other hand, concerning the problem of insufficient soft tissue, Simion et al. used a resorbable collagen matrix as a scaffold to carry recombinant human platelet-derived growth factor BB (rhPDGF-BB), and the results indicated that the soft tissue volume around implants increased moderately when applying a collagen matrix infused with rhPDGF-BB (Simion et al., 2012). Liu et al. employed acellular dermal matrix grafts to increase the attached gingiva and a resin splint to facilitate the healing of soft tissue attached to dental implants, and patients were satisfied with the reconstruction of dense connective tissue surrounding the implants after the operation. Patients with maxillary gingival recessions were treated with autologous fibroblast cell culture (AFCC) on a collagen scaffold placed under a coronally advanced flap (CAF), and the soft tissues were significantly improved, suggesting that AFCC is a novel tissue engineering concept and a reliable therapy for insufficient soft tissue during the repair of defects caused by tooth loss (Milinkovic et al., 2015).
In summary, with the aid of tissue engineering technology, an increasing number of cells and scaffolds have been used for bone regeneration after dental implantation, providing novel ideas for solving the problem of insufficient local bone mass at implant sites. Given the advantages of tissue engineering, such as less damage to the tissue around implants and good aesthetic effects, the development of oral implantology will be even more vigorous in the future.
Cleft Palate Repair Tissue Engineering
Cleft palate is one of the most common congenital malformations of the oral and maxillofacial region and can occur alone or together with cleft lip. Cleft palate not only manifests as soft tissue deformity but also as bone tissue defects and deformities, and it may be accompanied by disorders of jaw development. In other words, the occurrence of cleft palate has a huge impact on facial esthetics and also causes dysfunction in speech, eating, and breathing. Therefore, the repair of cleft palate is crucial, and surgery is one of the most important treatment methods. Traditional palatoplasty usually applies a relaxing incision to reduce tension, but the bone surface exposed after surgery will scar and can even lead to restricted development and deformity of the jawbone (Cantarella and Mazzola, 2020; Choi et al., 2021).
To solve or avoid the problems caused by traditional surgical methods in the healing of cleft palate, researchers have tried to find better approaches. Tissue engineering technology has been applied to repair cleft palate and has already yielded results in many studies. Bajestan et al. explored the use of ex vivo expanded stem cell populations to treat large alveolar bone defects in patients with a history of cleft palate or craniofacial trauma. The results indicated that stem cell therapy is safe, but its ability to completely reconstruct large alveolar defects is limited, so further optimization is needed to satisfy the requirements of cleft palate treatment (Bajestan et al., 2017). Sharif et al. developed a plasma-functionalized electrospun composite polymer membrane, modified the fabricated membranes by plasma polymerization, and then implanted them subcutaneously in rats. The results showed that these membranes were biocompatible and angiogenic, raising the possibility of permanent closure of oronasal fistulae (Sharif et al., 2019). Lee et al. created cell sheets derived from hMSCs and SHEDs for bone repair of cleft palate and found that the cell sheets led to calcification in vitro, indicating that osteogenic stem cell sheets may become a new choice for the reconstruction of cleft palate (Lee J.-M. et al., 2019). Li et al. developed a tissue-engineered graft for the repair of cleft palate in young rats by incorporating and integrating a synthetic polymer with a human decellularized amniotic membrane (DAM). This cell-free and absorbable graft could effectively guide soft and hard tissue regeneration and support palate regeneration and tissue growth (Li W. et al., 2019).
In summary, the use of tissue engineering techniques to repair cleft palate not only avoids the scar tissue formation, wound contraction, and facial deformity caused by traditional cleft palate repair surgery but also effectively reconstructs and stimulates the healing of defects. In other words, we believe that in-depth study of tissue engineering technology may bring a new breakthrough in the repair of cleft palate.
Oral and Maxillofacial Skin or Mucosal Tissue Engineering
Skin and mucosal lesions caused by inflammation, trauma, tumors, or autoimmune diseases are very common in the clinical practice of dentistry. Traditional autologous skin or mucosal flap transplantation is a popular method to treat such lesions, but it still has some disadvantages because the surgery injures the donor site. Meanwhile, the supply of homologous skin or mucosal flaps for transplantation is very limited, and the characteristics of exogenous tissue flaps differ from those of the oral and maxillofacial skin and mucosa. Even if a mucosal flap is successfully transplanted, it is difficult to maintain the secretion and lubrication functions of the oral mucosa (Wang Z.-S. et al., 2016). To repair oral skin and mucosal lesions, an important task for researchers is to find alternatives to the traditional transplantation of autologous skin and mucosa, and the application of tissue engineering technology may provide a new direction in this research area.
Peramo et al. reported a three-dimensional tissue construct that can be used to repair lip defects, consisting of a continuous layer containing the morphological features of the lips (epidermal skin, vermilion, and oral mucosa) and capable of producing tissues with an anatomy similar to that of native human lips (Peramo et al., 2012). Yoshizawa et al. found that grafting ex vivo-produced oral mucosa equivalent (EVPOME) with live oral keratinocytes onto an intraoral mucosal wound can effectively promote epithelial regeneration in oral wounds (Yoshizawa et al., 2012). Bayar et al. created a construct containing a mucocutaneous junction with a transitional zone (vermilion) in vitro, which can provide a microvascular prelaminated flap for lip reconstruction, and the results showed that this construct could promote a phenotypic expression of the regenerated tissue closer to native tissue (Bayar et al., 2016).
Some researchers prefer to combine flap surgery and tissue engineering technology to enhance the therapeutic effect of clinical treatment. Sieira Gil et al. proposed a new approach to obtain keratinized mucosa over a fibula flap using full-thickness, tissue-engineered, autologous oral mucosa and found that this oral mucosa can restore native tissue and avoid peri-implant tissue complications during the repair of oral mucosal defects (Sieira Gil et al., 2015). Other studies have built oral mucosal models using tissue engineering technology and evaluated changes at the implant-soft tissue interface, because the biological seal formed by the soft tissue around implants can impact the prognosis after dental implant treatment. Chai et al. developed a tissue-engineered three-dimensional oral mucosal model (3D OMM) using primary human oral keratinocytes, fibroblasts, and a skin-derived scaffold. A titanium implant was then inserted into the engineered oral mucosa, and the results showed that the tissue-engineered oral mucosa was similar to normal oral mucosa and that the 3D OMM can form epithelial attachments on the titanium surface (Chai et al., 2010). Trichloroacetic acid (TCA) has attracted the attention of dental researchers due to its pivotal role in skin regeneration. Lee et al. injected TCA into open wound defects of the palatal mucosa in beagles and found that TCA promoted the healing and regeneration of wound defects in oral soft tissue by upregulating cell cycle progression, cell growth, and cell viability (Lee K. et al., 2019).
The aforementioned studies demonstrate that tissue engineering technology can repair defects of the oral and maxillofacial skin or mucosa more easily. If tissue-engineered skin and mucosa are widely used in oral and maxillofacial clinical surgery, the challenges caused by the transplantation of traditional autologous skin or mucosal flaps can be effectively avoided.
Oral and Maxillofacial Bone Tissue Engineering
Oral and maxillofacial bone defects are caused by congenital deformity, trauma, tumors, inflammation, or periodontal disease and mainly include alveolar, maxillary, and mandibular bone defects (Bangun et al., 2021; Lin and Kudva, 2021). Bone transplantation, guided bone regeneration membrane technology, stimulation of osteogenesis, and prosthetic repair are the main methods for treating such defects. In the clinic, autologous bone is regarded as the "gold standard" for bone transplantation, but it also has some disadvantages. For example, autologous bone cannot be shaped freely, which affects functional recovery and appearance after surgery. Furthermore, its supply is limited, and complications may still occur after autologous bone transplantation. Recently, there have been many studies on the healing of oral and maxillofacial bone defects using bone tissue engineering technologies.
Khodakaram et al. compared the effects of fibrin glue scaffolds and autologous bone grafts on the healing of rabbit mandibular defects and found that they have similar osteogenic effects, so fibrin glue may be a good bone graft substitute for reconstructing maxillofacial bone defects (Khodakaram-Tafti et al., 2018). Shahnaseri et al. created a maxillary defect to simulate a human alveolar cleft model. One side of the defect was filled with hydroxyapatite/β-tricalcium phosphate scaffolds containing mesenchymal stem cells from the subcutaneous adipose tissue of dogs, and the other side was filled with autologous bone grafts collected from the tibia. The results showed that both grafts had good bone formation effects, so tissue engineering can be used as an alternative method to reconstruct bone defects (Shahnaseri et al., 2020). Redondo et al. seeded mesenchymal stem cells from alveolar bone onto BioMax scaffolds prepared from autologous serum and treated maxillary cystic bone defects under GMP conditions. The results showed that BioMax cross-linked serum scaffolds containing osteogenically differentiated MSCs achieved a good effect in the repair of maxillary defects (Redondo et al., 2018). Zhang et al. constructed tissue-engineered bone by using 3D-printed molds and high-temperature sintering and produced nanoporous hydroxyapatite scaffolds that convincingly repaired in situ bone defects in experimental dogs.
The reconstruction of bone defects (especially critical-sized bone defects) is difficult because the survival and growth of bone require surrounding and internal blood vessels to provide oxygen and nutrients. Therefore, the vascularization of tissue-engineered bone is very important in the repair of oral and maxillofacial bone defects. Matthias et al. successfully reconstructed large posttraumatic mandibular defects by using fresh-frozen humeral allografts seeded with autologous bone marrow aspirate and vascularized them with a radial forearm flap (Matthias et al., 2019).
There are four main methods to reconstruct the blood supply of tissue-engineered bone: 1) using growth factors to promote the formation of new blood vessels (Omorphos et al., 2021); 2) culturing vascular endothelial cells as seed cells with the scaffold to form a complex unit and then implanting it in vivo to promote angiogenesis (Hancock et al., 2021); 3) combining microsurgery with bone tissue engineering to promote blood vessel formation (Vidal et al., 2020); and 4) using genetic engineering to promote blood vessel formation (Est-Witte et al., 2020). Selecting the appropriate tissue-engineered bone and constructing a good blood supply system will accelerate the healing of critical-sized bone defects. We believe that, with the support of osteogenic cells, scaffolds, and growth factors, more and more tissue-engineered bone will be developed, and oral and maxillofacial bone defects will be repaired more easily.
Limitations
We have mentioned that the basic elements of tissue engineering technology are cells, scaffolds, and growth factors. Current studies have obtained satisfactory reconstruction results, but some disadvantages still limit the development of tissue engineering. If researchers understand these limitations correctly, this will contribute to the further research and application of tissue engineering and help solve problems in the healing of defective tissues or organs.
Limitations of Cells
At present, the cells used in tissue engineering research mainly include xenogeneic cells, allogeneic cells, and autologous cells. Xenogeneic cells are taken from nonhuman tissues and can be derived from animals such as pigs and dogs, which means that their use may cause immune rejection. Although some researchers have overcome this immune rejection (Mohiuddin et al., 2014; Iwase et al., 2015), the safety and long-term therapeutic effects of xenogeneic cells still need to be verified further. Compared with xenogeneic cells, allogeneic cells can better overcome immune rejection (Goyer et al., 2019), but they may have other disadvantages. In recent years, research on allogeneic cells has mainly focused on human embryonic stem cells derived from 1) naturally or artificially aborted embryos and 2) in vitro fertilized embryos. However, the use of human embryos is considered extremely cruel and immoral, and it is illegal in many countries. Autologous cells are taken from the patient's own tissues and have the potential to regenerate various tissues and organs.
Autologous cells, unlike xenogeneic and allogeneic cells, do not cause immune rejection and raise no ethical problems, but their application is restricted by their limited supply and by the trauma caused during cell harvesting.
Limitations of Scaffolds
As previously summarized, natural biomaterials, synthetic polymer materials, and hydrogel scaffolds all have some limitations. Although most natural biomaterials are derived from animals and show good biocompatibility in in vivo and in vitro experiments, they are still judged as non-autologous and labeled as foreign bodies by the immune system and may eventually induce serious immunogenic responses after long-term use (Gilmartin et al., 2013). In addition, attention should be paid to the instability of these biomaterials and the variability of their molecular structures among different batches (Ige et al., 2012). Synthetic polymer materials have generally exhibited poor cell affinity in previous studies (Zhao W. et al., 2016). The major disadvantage of electrospun scaffolds is the complexity of electrospinning and the lack of defined control, so more reliable data from animal experiments are needed to support future practical applications (McClellan and Landis, 2016). Rasperini et al. reported the first human case in which a 3D-printed bioresorbable polymer scaffold was used to treat a periodontal osseous defect; however, the scaffold was exposed at 13 months and removed at 14 months because of a large dehiscence and failure of wound healing (Rasperini et al., 2015). How to control the degradation rate of scaffolds to match the speed of defect healing and how to prepare layered scaffolds that can guide coordinated tissue regeneration may be the main directions for improvement in the future.
Limitations of Clinical Application
Constructing a tissue engineering complex rich in living cells in vitro and then implanting it in vivo is the main process of engineered tissue or organ transplantation. However, it also poses some potential risks to the recipients of implanted engineered tissues or organs. When culturing the engineered complex in vitro, it is necessary to add fetal bovine serum, streptomycin, or other substances that promote cell growth, but most of these substances are not of human origin, so the engineered complex may cause allergic reactions after implantation in vivo. On the other hand, absorbable polymer materials and some other types of materials are often selected as scaffolding materials to support seeded cells. Although most of these materials show no toxic effects, their long-term safety and immunological rejection are still major concerns for clinical application. For example, allogeneic bone is a popular scaffold material, but it still retains some antigenicity even when processed at extremely low temperatures. Therefore, the safety and validity of engineered tissues or organs should be considered further before they are applied in the clinic.
CONCLUSION
In summary, tissue engineering has broad prospects in stomatology and provides a valuable direction for future research on tooth loss, periodontal defects, dental implants, cleft palate defects, oral and maxillofacial skin or mucosal defects, and bone defects. It is believed that, with the in-depth exploration of tissue engineering, ideal seed cells, better scaffold materials, and suitable growth factors will be discovered and applied in the effective clinical management of oral diseases in the future.
AUTHOR CONTRIBUTIONS
LC, HS, and XC collected the data on seed cells, scaffolds, and growth factors; drew the figures and made the tables; and drafted the overview of the manuscript. MS and JX collected the data on dental implant tissue engineering, cleft palate tissue engineering, and oral and maxillofacial skin and mucosal tissue engineering. JL reviewed the manuscript. YZ conceived the presented idea, reviewed and revised the manuscript, and approved the final version. All authors agreed to be accountable for all aspects of the work.

| 6,424.6 | 2021-11-08T00:00:00.000 | ["Medicine", "Engineering"] |
Absence of free carriers in silicon nanocrystals grown from phosphorus- and boron-doped silicon-rich oxide and oxynitride
Phosphorus- and boron-doped silicon nanocrystals (Si NCs) embedded in a silicon oxide matrix can be fabricated by plasma-enhanced chemical vapour deposition (PECVD). Conventionally, SiH4 and N2O are used as precursor gases, which inevitably leads to the incorporation of ≈10 atom % nitrogen, rendering the matrix a silicon oxynitride. Alternatively, SiH4 and O2 can be used, which allows for completely N-free silicon oxide. In this work, we investigate the properties of B- and P-incorporating Si NCs embedded in pure silicon oxide compared to silicon oxynitride by atom probe tomography (APT), low-temperature photoluminescence (PL), transient transmission (TT), and current-voltage (I-V) measurements. The results clearly show that no free carriers, neither from P- nor from B-doping, exist in the Si NCs, although in some configurations charge carriers can be generated by electric field ionization. The absence of free carriers in Si NCs ≤5 nm in diameter, despite the presence of P- or B-atoms, has severe implications for future applications of conventional impurity doping of Si in sub-10 nm technology nodes.
Introduction
The conductivity type and free carrier concentration of a semiconductor can be controlled via doping. Conventional impurity doping requires the incorporation of a suitable foreign atom on a lattice site and its ionization by thermal energy. Therefore, the energetic position of a dopant in the bandgap has to be close to the respective band edge. For Si, typical dopant ionization energies are in the range of ≈50 meV. If the size of a Si crystal approaches the exciton Bohr radius, strong quantum confinement sets in and the valence- and conduction-band ground-state energies shift to lower and higher energies, respectively. As a consequence, the dopant ionization energies increase, which exponentially decreases the free carrier density [1]. If a doped Si nanovolume is embedded in a matrix of lower permittivity (e.g., a dielectric), the dopant charge is not fully screened in the silicon, and a Coulomb interaction with its image charge in the dielectric occurs. Irrespective of quantum confinement, this so-called dielectric confinement increases the dopant ionization energy even further [2]. At the nanoscale, the incorporation of an impurity on a lattice site is also subject to an increased formation energy compared to the bulk, so that, despite thermal activation via, e.g., a high-temperature annealing process, a significant fraction of potential dopants will remain on interstitial sites [3]. The decreasing number of Si-Si bonds per Si NC atom is a crucial factor in the increase of dopant formation energies [4]. These factors impede efficient impurity doping of Si nanovolumes and complicate applications of Si NCs in devices based on p-n junctions, such as solar cells or light-emitting devices [5,6]. Furthermore, semiconductor device fabrication technology nodes target the sub-10 nm scale in the near future, i.e., length scales where the effects described above will appear.
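As a rough numerical illustration of the exponential carrier freeze-out described above, the Python sketch below evaluates a simple Boltzmann estimate of the ionized-dopant fraction, exp(-E_ion/kT). The 50 meV value is the bulk-like figure from the text; the larger values stand in for confinement-enhanced ionization energies and are illustrative only, since a full dopant-statistics treatment would include degeneracy factors and the density of states.

```python
# Boltzmann-type estimate of the ionized dopant fraction, n/N_D ~ exp(-E_ion/kT).
# Trend illustration only; degeneracy factors and the conduction-band density
# of states are neglected.
import math

K_B_EV = 8.617e-5   # Boltzmann constant (eV/K)
T = 300.0           # room temperature (K)

# 50 meV: bulk-like value from the text; larger values: illustrative
# confinement-enhanced ionization energies.
for e_ion_mev in (50, 150, 300):
    frac = math.exp(-e_ion_mev * 1e-3 / (K_B_EV * T))
    print(f"E_ion = {e_ion_mev:3d} meV -> ionized fraction ~ {frac:.1e}")
```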
Si NCs of a few nanometres in diameter (i.e., quantum dots) represent a good model system to study doping at the nanoscale. They can be fabricated by various methods [7-9] and doped either during growth [7] or post-growth [10]. A recent review provides a broad overview of all available techniques and approaches [11]. Here, we focus on Si NC growth via phase separation of PECVD-deposited, P- or B-doped silicon-rich oxide thin films by annealing at high temperatures. Additionally, we focus on comparatively lowly doped samples (on the order of 0.1-1 atom %) to study the classical electronic doping of Si NCs. In contrast, dopant concentrations up to 60% (also referred to as hyperdoping) were shown to induce localized surface plasmon resonances and metal-like free carrier densities [12-15]. The standard PECVD precursor gases for silicon oxide are SiH4 and N2O. Since Si-rich oxides have to be grown under O-depletion, some of the N-radicals present in the plasma react with the Si and are subsequently incorporated into the film. The resulting material is inevitably a Si-rich oxynitride (SRON), in our case with ≈10 atom % N [16]. Considering some safety issues, the oxidizing PECVD precursor gas can be replaced by O2, which allows for N-free Si-rich oxides (SRO) [17]. In both cases, small amounts of PH3 or B2H6 can be added during deposition to achieve P- or B-doped SRON or SRO, respectively.
In this study, we investigate the structural, optical, and electrical properties of P- and B-incorporating Si NCs in both embedding dielectrics. We will show that, despite some minor differences between the four sample configurations, no free carriers associated with a doping behaviour of P or B are observed.
Experimental
Superlattices of SiO2 and SRO or SRON, respectively, were deposited on Si and quartz glass substrates by PECVD using processes described in [16,17]. Small amounts of 1% PH3/Ar or 10% B2H6/SiH4, respectively, were added to the Si-rich layers (both SRO and SRON), whereas in all cases the SiO2 barrier layers remained undoped. All samples were annealed for 1 h in ultra-pure N2 in a quartz tube furnace at 1100 °C (SRO) or 1150 °C (SRON). The thicknesses of the Si-rich oxide layers determine the mean size of the Si NCs, which is approximately equal to the initial layer thickness. Samples dedicated to luminescence and electrical measurements were post-annealed in the same furnace in pure H2 gas at 450 °C for 1 h to passivate dangling bond defects [18]. For electrical measurements, MOS capacitors were processed by thermal evaporation of Al contacts. Molecular Cs+ secondary ion mass spectrometry (MCs+-SIMS [19]; Cameca IMS-4f) with 3 keV Cs+ (for SRO:P/B) and 5.5 keV Cs+ (for SRON:P/B) was used to quantify the sample composition, including the P- or B-concentration, by means of a calibrated standard. APT was measured with a LEAP™ 4000X Si (Cameca) with a pulsed UV laser (355 nm, 100 pJ, 250 kHz), a cooled specimen holder (≈40 K), and a chamber pressure of 10−12-10−11 Torr. The atom detection efficiency is 57%. For data reconstruction, IVAS™ software (version 3.6.6) was used. APT specimens (needle-shaped tips attached to the apex of a Mo support grid) were structured using an Auriga (Zeiss) focused ion beam scanning electron microscope. PL was measured using a LN2-cooled CCD camera attached to a single-grating monochromator with excitation by the 325 nm line of a HeCd laser. Low-temperature PL spectra were measured from 5 to 300 K using a single-window continuous-flow liquid-He cryostat. TT dynamics were measured in a standard pump-probe configuration with a laser system of 100 fs pulse length and 1 kHz repetition rate (Tsunami, Spitfire, Newport). The fundamental 800 nm output was partly used as the probe and partly frequency-doubled to 400 nm and used as the pump. The measurements were done at room temperature. I-V and I-t were measured under accumulation bias, in the dark and at room temperature, with an Agilent B1500A semiconductor device analyser and a Cascade M150 prober in a shielded dark box.
Dopant concentration and incorporation
At first, we determine the P-concentration as a function of the PH3-flux for SRO and SRON via MCs+-SIMS measurements. For this task, special samples were fabricated consisting of several 50 nm-thick SRO:P and SRON:P layers with different PH3-fluxes, separated by SiO2 spacer layers (20 nm and 10 nm thickness, respectively). The SIMS depth profiles for as-deposited SRO:P and SRON:P are shown in Figure S1a and Figure S1b of Supporting Information File 1. It turns out that the P-concentration in SRO:P can be adjusted by the available PH3-flux from 0.59 to 4.61 atom %, while for SRON:P the range is limited to 0.18-0.71 atom %. In Figure S1c and Figure S1d of Supporting Information File 1, the SIMS depth profiles for similarly configured SRO:B and SRON:B layers are shown. Here, the B-concentration is controlled by the B2H6-flux in the range of 0.13-1.32 atom % for SRO:B and 0.02-0.14 atom % for SRON:B. When plotting the P- and B-concentrations in the Si-rich oxides as a function of the flux ratio of PH3 and SiH4, or B2H6 and SiH4, respectively, a quasi-linear dependence is found; see Figure 1. Generally, the dopant concentrations in SRON are lower than in SRO, which is caused by the very different precursor gas flows used in the SRON [16] and SRO [17] recipes. Nevertheless, for both dopants there is a concentration overlap region (indicated by grey boxes in Figure 1): for P in the range of 0.6 ± 0.1 atom % and for B in the range of 0.13 ± 0.02 atom %. Any direct comparison between doped SRO and SRON samples should hence be made in that overlap region to allow for equal nominal dopant concentrations. While the dopant-precursor flows are similar for each Si-rich oxide type, the average concentration of dopants is a factor of ≈5 lower for B than for P, although the same amount of B2H6 gas contains twice the number of dopant atoms compared to PH3. As a consequence, the incorporation efficiency of B in Si-rich oxides is approximately one order of magnitude lower than that of P.
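The quasi-linear flux-concentration relation of Figure 1 lends itself to a simple linear calibration. The sketch below fits such a line with NumPy; the flux ratios and the two intermediate concentrations are made-up example points (only the endpoint concentrations 0.59 and 4.61 atom % appear in the text), so the resulting coefficients are purely illustrative.

```python
# Illustrative linear calibration of P-concentration vs. PH3/SiH4 flux ratio.
# Flux ratios and intermediate points are hypothetical; only the endpoint
# concentrations (0.59 and 4.61 atom %) are taken from the text.
import numpy as np

flux_ratio = np.array([0.002, 0.005, 0.010, 0.020])   # assumed PH3/SiH4 ratios
p_atom_pct = np.array([0.59, 1.40, 2.60, 4.61])       # atom % P

slope, intercept = np.polyfit(flux_ratio, p_atom_pct, 1)
print(f"P [atom %] ~ {slope:.0f} * (PH3/SiH4) + {intercept:.2f}")

# Predict a concentration for a new flux ratio within the calibrated range:
print(f"predicted at ratio 0.008: {slope * 0.008 + intercept:.2f} atom %")
```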
Since SIMS cannot reveal the distribution of the dopants in the heterogeneous sample system of Si NCs and SiO2 after annealing, atom probe tomography (APT) is used. APT was demonstrated to be a powerful method to reveal structural details of impurity elements in Si NCs [20,21]. In order to determine the incorporation of P-atoms into Si NCs, APT was measured for samples with SRO:P (0.59 atom %; for an image of a typical 3D reconstruction see Figure 2a) and SRON:P (0.71 atom %). The mass spectra can be found in Figure S2 of Supporting Information File 1. For reference, and to exclude critical mass spectrum peak overlaps of, e.g., 31P+, 30Si16O2 2+, and 30Si1H+, an additional P-free sample was measured, and no other signals influencing the ascription to P were found. Furthermore, the signals at 14 Da (dalton, i.e., the unified atomic mass unit) and 28 Da indicate a very small influence of N on the mass spectra, which is consistent with its rather high ionization energy. Signals of 14N2+ at 7 Da and 14N3+ at 42 Da in the mass spectra are assigned to N-ion peaks, but their contribution is too small to quantify the amount of N. The determination of P-ions in the mass spectra in this study was carried out without further data correction (e.g., for delayed evaporation events, so-called thermal tails). Still, the method used to analyse the data of both SRON and SRO samples is the same; thus, the P-concentrations are directly comparable to each other. In Figure 2b, the proxigram (proximity histogram) analyses [22] of all detected NCs in the respective samples are shown. As in previous works, the Si NCs were delineated by 70 atom % Si iso-concentration surfaces [23]. A voxel size of 0.5 nm and a delocalization of (x, y, z) = (1 nm, 1 nm, 1.5 nm) were used [24]. The bin size of the proxigram was set to 0.1 nm. Note that these parameters do not change the trend of the composition profiles of either sample. At first sight, no significant differences in the distribution of P-atoms in the NC interior, at the Si/SiO2 interface, and in the SiO2 matrix are found. In particular, the interior of the Si NCs and the near-interface region of the SiO2 have almost identical P-concentrations of ≈0.5-0.7 atom %, while apparently less P is dissolved in the N-free SiO2 matrix (≈0.2 atom %) than in the oxynitride matrix (≈0.3 atom %). However, given the 20% (relative) higher initial P-concentration in SRON:P compared to SRO:P and a measurement uncertainty in the range of 0.1 atom %, this difference might be negligible. The overall P-distribution corresponds very well to previously observed trends for P in Si NCs [23-26]. We note that the ≈20% O-concentration in the NC interior is an artefact of local magnification effects (LME) [27,28], which is generally observed in this material system [24-26,29,30]. Inevitably, this artefact also influences the exact values of the P-concentration, but since both samples are subject to the same LME, the comparison discussed above is not affected. Besides LME, there are also other factors influencing the precision and resolution of APT, such as inhomogeneous tip shape evolution during the measurement [31], delayed dissociation of molecules during flight [32], and the associated problems with the detection of neutral fragments [33]. Still, APT provides unique and very useful data inaccessible by any other method.
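To make the proxigram concept concrete, here is a toy Python reconstruction of such a profile: synthetic atom positions are binned by their signed distance to an idealized spherical 70 atom % Si iso-surface, and the local Si concentration is reported per 0.1 nm bin. The geometry, compositions, and atom counts are invented for illustration and do not reproduce the IVAS reconstruction or the measured data.

```python
# Toy proximity histogram ("proxigram"): atoms are binned by signed distance
# to an interface, here an idealized spherical NC surface of radius 2 nm.
# All positions and compositions are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
pos = rng.uniform(-5.0, 5.0, size=(n, 3))     # nm; synthetic analysis volume
r = np.linalg.norm(pos, axis=1)

# Synthetic chemistry: Si-rich core (r < 2 nm) inside an oxide-like matrix.
p_si = np.where(r < 2.0, 0.90, 0.33)          # local probability that an atom is Si
is_si = rng.random(n) < p_si

edges = np.arange(-1.0, 1.05, 0.1)            # 0.1 nm bins, as used in the text
d = r - 2.0                                   # signed distance to the iso-surface (nm)
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (d >= lo) & (d < hi)
    if sel.any():
        print(f"{(lo + hi) / 2:+.2f} nm : {100 * is_si[sel].mean():5.1f} atom % Si")
```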
In Figure S3a of Supporting Information File 1, the NC size distribution derived from APT using iso-concentration surfaces of ≥70 atom % Si is plotted, as well as the relative frequency of the number of P-atoms per NC. The number of P-atoms incorporated per NC and the P/Si ratio, both as a function of NC volume, are shown in Figure S3b of Supporting Information File 1. While details of this data set are discussed in Supporting Information File 1, it can readily be concluded that the P-distribution and NC incorporation are very similar for SRO:P and SRON:P. Therefore, the presence of N in the oxynitride matrix apparently has no influence on the structural properties of P-doped NC samples.
The SRO:B material has already been analysed by APT in [30], with the result that B is generally less likely to be incorporated deep in the NC core and more likely to reside near the inner surface (in agreement with earlier theoretical [34,35] and experimental [26] evidence). Due to the maximum B-concentration in SRON:B of only 0.14 atom %, a statistically meaningful APT analysis cannot be achieved. However, the absence of any significant differences between SRON:P and SRO:P suggests that the nitrogen in the oxynitride matrix will not have a notable influence on the B-distribution when SRON:B and SRO:B are compared.
Photoluminescence and transient transmission
Due to quantum confinement effects the ground-state energy of Si NCs increases and the k-space overlap of electron and hole wave functions is significantly enhanced (Heisenberg's uncertainty principle). Therefore, excitons formed in Si NCs are subject to significantly higher radiative recombination probabilities, allowing the luminescence quantum yield to reach ≈30% [36,37], or even ≈60% for organically capped NCs [38]. In the presence of a third charge carrier (a free electron from an ionized P-donor or a hole from an ionized B-acceptor) radiative recombination is very unlikely, since ultra-fast non-radiative Auger recombination will prevail [7]. On the other hand, the observation of PL quenching alone cannot prove the presence of free carriers, since dopant-induced defects can also be involved [29,30,39-41]. In Figure 3a, the dependence of the PL spectra on the P-concentration in SRO:P and SRON:P is demonstrated.
Here, all samples are H2-passivated, and hence only the PL-quenching effect of P-incorporation is visible, not the PL-enhancement often observed for low P-concentrations and attributed to dangling-bond passivation by P [42]. Up to a level of ≈0.6 atom % P the PL intensity drops by less than 40% without any significant peak shift. According to the APT data shown above and in Supporting Information File 1, only the smallest NCs of each sample remain rather P-free and therefore potentially PL-active, which would imply a strong PL blueshift if Auger quenching by P-donors is considered. From Figure 3a and 3b, however, it is obvious that neither a spectral shift nor an efficient PL-quenching by P-incorporation takes place. In contrast, the PL remains very intense up to P-concentrations in SRO beyond 1 atom %, i.e., vastly exceeding the solubility limit of P in Si. An almost complete suppression of PL occurs only for samples with 4.61 atom % P. Within the concept of PL-quenching by free carriers induced by P-atoms in the Si NCs, it remains unclear why concentrations of several atom percent should be required, although APT already detects one or several P-atoms in the majority of NCs for samples with 0.6–0.7 atom % P. It appears more consistent with the available data that P-induced defects (e.g., from interstitial P in the Si NCs or SiOx:P-related states at the surface) cause the PL quenching, as supported by density functional theory (DFT) calculations [29,41]. In that context, it is also likely that for samples with >1 atom % P the P-concentration peak found at the Si/SiO2 interface (cf. Figure 2b) reaches a level where a highly enriched P-shell forms on the NC-surface that enables efficient formation of non-radiative defect states.
For boron, strong PL-quenching is likewise not observed before the B-concentration exceeds 1 atom %, as shown in Figure 3c, and the same argumentation holds for B-induced defects with states in the fundamental gap of Si NCs, as determined by DFT [30]. The as-measured PL peak intensity of the SRO:B sample set (open purple circles) is unfortunately obscured by variations in the initial excess-Si content (cf. Figure S1c of Supporting Information File 1), which directly influences the NC-density in those samples. Hence, the data set is corrected for the excess-Si content as measured by MCs+-SIMS (filled purple circles in Figure 3c; for details see the caption of Figure S1 of Supporting Information File 1).
We note that the overall PL-quenching behaviour of Si NCs in doped SRO and SRON is similar. Hence, the presence of nitrogen in the matrix does not have a major impact on the formation of B-or P-induced centres that quench the PL.
Electronic doping, i.e., the generation of free carriers from dopants on substitutional lattice sites, requires thermal ionization, typically provided by the thermal energy at room temperature. Ignoring all the evidence of a defect-related PL-quenching of Si NCs containing P- or B-atoms, we would anticipate the following from low-temperature PL measurements of successfully electronically doped Si NCs: (i) an increase in the PL-intensity as soon as free carriers are frozen out, accompanied by (ii) a spectral redshift due to the circumstance that within the NC-size ensemble the largest NCs are more easily doped than the smaller NCs, and (iii) significant differences in the PL peak behaviour when compared to undoped reference samples. In Figure 4, the T-dependent PL-peak analyses of spectra measured at very low excitation fluxes of ≈0.4 mW/cm2 (to prevent over-excitation artefacts [43]) are presented. The spectra themselves can be found in Figure S4 of Supporting Information File 1. Figure 4a shows the relative PL-intensity as a function of sample temperature (T) with respect to the intensity at 5 K, where all free carriers from potential dopants would be completely frozen out. The intensity trends roughly follow the low-excitation measurements shown in [44]. For approximately T > 150 K the intensity drops below unity due to the thermal activation of non-radiative recombination channels [36,44]. The relative PL-intensities of all samples with respect to their 5 K values end up in the same range of values at room temperature. Hence, a freeze-out effect of dopant-induced free carriers that quench the PL is not observed, in accordance with dopant-induced defect states deep within the fundamental gap of the NCs. Figure 4b plots the PL peak shift with reference to 300 K, i.e., where a maximum of dopant-induced free carriers would quench the PL, which would preferentially affect the larger NCs with the least confinement energy. Instead of the PL redshift expected for doped NCs with decreasing T, we observe a small blueshift related to the thermal contraction of the lattice and reduced electron–phonon interaction, which typically saturates around 100–200 K for lowly excited samples [43]. In addition, the blueshift of the reference samples is slightly more pronounced for T < 150 K than that of the doped samples. The increase for T ≤ 25 K is most likely an artefact from over-excitation [43] despite the very low laser intensity. The reason for using two nominally identical reference samples (both undoped SRO) in PL is the different number of NC-layers in the superlattice (10 for SRO:P and 20 for SRO:B). Any differences between the reference samples should therefore be interpreted as the sample-to-sample scatter.
In Figure 5 we report the transient transmission dynamics of samples with 4.5 nm Si NCs made of (a) SRON:P with 0.71 atom % P and (b) SRO:B with 1.32 atom % B, i.e., samples with substantial incorporation of dopant atoms and significant PL quenching. For this measurement the excitation pump pulse wavelength was 400 nm (efficiently absorbed by the NCs) and the probe pulse wavelength was 800 nm, which is hardly absorbed by the NCs. However, if free carriers are present in the NCs, whether from optical excitation or from doping, the probe light is absorbed. The transmission of the sample at the probe wavelength in the unexcited state is measured as T0, and the transmission as a function of delay time between pump and probe (in steps of ≈100 fs) is plotted normalized to T0. The pump flux is chosen to generate only a few excitons per NC [45]. Specifically, 2.3 mJ/cm2 (SRON) and 3.4 mJ/cm2 (SRO) were used, which correspond to the excitation regime with normal Auger recombination of excitons, excluding bimolecular recombination [46]. If an additional free carrier (an electron from a P-donor or a hole from a B-acceptor) were present in a Si NC, the generated exciton(s) could efficiently and quickly recombine with the unpaired charge carrier via an Auger process. This would substantially accelerate the reduction of the total carrier density, and a doped sample would become transparent in a shorter time than an intrinsic sample. It is obvious from Figure 5a and 5b that neither for SRON:P nor for SRO:B do accelerated TT-dynamics exist. When fitting the curves, the best results are obtained for a two-exponential fit, where the fast component τ1 ≈ 0.5–1.5 ps for all samples is attributed to ultrafast carrier trapping and thermalization events. The long component τ2 is associated with the actual Auger recombination of excitons and ranges from 5 to 8 ps without differences between doped and undoped samples. We conclude that no measurable initial carrier densities exist at room temperature in P- or B-doped Si NCs in a silicon oxide matrix.
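A minimal sketch of such a two-exponential fit is given below (synthetic data; the amplitudes, noise level, and choice of fitting library are our assumptions for illustration, not the original analysis):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, a1, tau1, a2, tau2):
    """Two-exponential decay: fast carrier trapping/thermalization (tau1)
    plus Auger recombination of excitons (tau2)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Delay times in ps (~0.1 ps steps) and a synthetic induced-absorption trace
t = np.arange(0.0, 30.0, 0.1)
rng = np.random.default_rng(0)
signal = two_exp(t, 0.6, 1.0, 0.4, 6.5) + rng.normal(0, 0.01, t.size)

popt, _ = curve_fit(two_exp, t, signal, p0=(0.5, 1.0, 0.5, 6.0))
a1, tau1, a2, tau2 = popt
print(f"tau1 = {tau1:.2f} ps (trapping), tau2 = {tau2:.2f} ps (Auger)")
```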
The TT-results are presented for both H2-passivated and unpassivated states without distinctive differences, but one remark concerning the interaction of doping and hydrogen shall be made: while P in the Si NC system is known to passivate dangling bonds (DBs) at the Si/SiO2 interface [7,42], similar to a post-annealing in H2, hydrogen treatments have also been shown to deactivate P-donors and B-acceptors in heavily doped Si nanowires [47] and in the bulk [48-50]. However, this type of dopant passivation relies solely on very reactive atomic hydrogen (rather than molecular H2 gas) and requires much lower temperatures of 100–150 °C to be efficient. When considering H2 as used in our work, the effective (endothermic) dissociation enthalpies of the reactions 2 P-DB + H2 → 2 P-H and 2 Si-DB + H2 → 2 Si-H are ca. 0.05 eV and ca. 0.09 eV per DB passivation, respectively [51]. Consequently, P–H bonds already break at temperatures significantly lower than those used for H2 anneals to passivate Si-DBs (450–500 °C).
Such an H-passivation mechanism of dopants requires their substitutional incorporation, which apparently occurs only in very small fractions for dopants in Si NCs (see section Electrical properties below). Therefore, neither experimental evidence nor fundamental considerations support the notion that the doping effect of P or B in Si NCs is obscured by H2-passivation. In contrast, the passivation of DB-defects at the Si/SiO2 interface often improves the interpretability of the measured data.
Electrical properties
If free charge carriers were present in the Si NCs, or if they are generated via ionization by an external electric field, it is possible to detect their presence by I–V measurements on MOS capacitors with additional injection barriers [52,53]. Respective samples (injection-blocking MOS capacitors) were fabricated with 10 nm-thick SiO2 buffer and capping layers to prevent low-field injection of carriers from either substrate or gate, so that only transient displacement currents are measured. The current density versus electric field (J–E) curves of B- or P-incorporating SRO and SRON samples, together with dopant-free reference samples, are depicted in Figure 6. None of the reference samples (dashed lines) shows a current peak in the low E-field regime, which excludes significant contributions to the displacement current by undoped Si NCs or their host matrices (pure oxide vs oxynitride). The J-curves of the P-incorporating NCs show a broad peak at ≈0.5 MV/cm for SRO:P and a sharper peak at ≈0.3 MV/cm for SRON:P. The peak character is less clearly expressed for SRO:P: whereas the rising shoulders of both J-peaks are quite similar, only for SRON:P does the current density decrease beyond the peak with a comparable slope, while it remains on a plateau for SRO:P. The origin of the J-signal is the ionization of substitutional P-atoms in Si NCs and the subsequent accumulation of the "free" charge carriers under the gate blocking oxide (cf. [29] and [52] for details). Following the calculations therein, we can estimate the P-ionization energy from the J-peak (or, respectively, the beginning of the J-plateau) to be ≈200 meV, in accordance with literature values on ionization energies of nano-sized Si [54,55]. For SRO:P the J-plateau indicates a broader distribution of P-ionization energies towards even larger values. The comparison between SRO:B and SRON:B is even more surprising: while Si NCs from SRO:B do not show a J-peak but rather identical I–V behaviour to the respective reference, the SRON:B sample shows a weak plateau-like peak at ≈0.4 MV/cm. It is emphasized that the J-peak is found in the maximally B-doped SRON sample, but not in the maximally doped SRO:B sample, despite an almost one order of magnitude higher B-concentration in the latter. Hence, B-doped Si NCs may be field-ionized in SRON, in contrast to our previous report on SRO [30] and in accordance with [53]. The hole-tunnelling properties of the matrix seem to determine the visibility of this effect, and oxynitride seems to allow for a better hole transport (maybe assisted by N-related states), while the N-free pure oxide apparently camouflages the field-ionization of B in SRO:B. In the case of electrons from field-ionized P-donors the situation is less critical, since the conduction band offset of Si NCs and SiO2 is only about half that of the valence band offset [56]. With the exception of the SRO:B samples, the doped samples also show a 1–2 orders of magnitude higher J-level than their respective references throughout a major part of the transient period. By integrating the measured current over time, the corresponding total charge, generated by field ionization of dopants on Si-lattice sites in the NCs, can be estimated [52]. The free carrier densities of all samples at 0.2 MV/cm are shown in Figure 7b. Values of (4 ± 3) × 10^15 cm^−3 were obtained for the reference samples (grey open circles); we note that these values are strongly influenced by dielectric relaxation.
From the doped samples (black filled circles) only SRO:B has a similar value (slightly below its reference). All other doped NC-samples have free carrier concentrations in the 10^16 cm^−3 range. In order to exclude a contribution to the free-carrier values from the dielectric relaxation, we subtract the reference values to obtain the effective free carrier density (N_F,eff, red spheres in Figure 7b). It is obvious that P dominates over B and SRON over SRO: sample SRON:P has about twice the integral charge of SRO:P, and SRON:B is an order of magnitude lower than SRON:P. In this context, we point out that SRON:B has a B-concentration that is just 30% of the P-concentration in SRON:P. Although the figure of merit is not the initial dopant concentration in the Si-rich oxide but the substitutional incorporation in the NCs, these results still indicate that B-doping is less efficient than P-doping. This is underlined by Figure S1 of Supporting Information File 1. We note that for SRON:P a field-ionization doping efficiency of ≈4% was derived by dividing N_F,eff by the number of P-atoms in the NCs measured by APT [29], which allows us to estimate a field-ionization doping efficiency of ≈2% for Si NCs from SRO:P, whereas for SRON:B, in the absence of measurable APT results, no efficiency can be estimated.
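The charge-integration step can be sketched as follows (a toy calculation; the transient shape and the thickness of the NC-containing layer are placeholders, not the actual device parameters):

```python
import numpy as np

# Hypothetical displacement-current transient: J(t) in A/cm^2, t in s
t = np.linspace(0.0, 100.0, 1001)
J = 1e-9 * np.exp(-t / 20.0)          # illustrative decay only

q = 1.602e-19                          # elementary charge [C]
d_active = 200e-7                      # assumed NC-layer thickness [cm] (placeholder)

Q = np.trapz(J, t)                     # integrated charge per area [C/cm^2]
N_free = Q / (q * d_active)            # volume density of field-ionized carriers [cm^-3]
print(f"N_free ~ {N_free:.1e} cm^-3")  # ~1e15-1e16 for these toy numbers
```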
Conclusion
Comparing oxynitride and N-free oxide as matrices for P- or B-incorporating Si NCs, no significant differences were observed structurally (SIMS, APT) or optically (PL, TT). Electrical (I–V, I–t) differences do occur, which appear to be related to the insulating nature of the oxide matrix itself and the respective band offsets. For both dopants a slight room-temperature PL quenching is observed, becoming strong only for dopant concentrations beyond 1 atom %. This circumstance, together with the absence of the spectral behaviour expected for doped NCs, indicates that dopant-induced defects are the origin of the PL quenching, in accordance with theoretical DFT predictions. Low-temperature PL spectroscopy and transient transmission measurements show no indications of dopant-induced free carriers in Si NCs. Electrical measurements on MOS capacitors with additional injection-blocking layers prove that E-fields in the range of 0.3–0.5 MV/cm are required to ionize the small fraction of lattice-incorporated dopants and to generate charge carriers. It was shown that the higher resistivity of the N-free oxide compared to oxynitride masks the field-induced charge-carrier generation from B-doped NCs; for P-doped NCs in the two matrices this effect was not found.
Summarizing the results reported here and previously [29,30,41,52], it turns out that P- and B-dopants in oxide-embedded Si NCs remain predominantly on interstitial sites, where they cannot be ionized by thermal energy at room temperature, in agreement with the nanoscale effects of self-purification and quantum and dielectric confinement. This results in diminutive doping efficiencies [57]. We note that broader NC-size distributions with tails towards the ≈10 nm range [58] or percolated nano-Si networks [53] are not subject to the same strong confinement conditions, so that measurable free carrier densities are likely.
The fundamental inability of efficient conventional impurity doping at the bottom end of the nanoscale requires different doping approaches that either relocate the dopants into the surrounding matrix (e.g., Si modulation doping by SiO2:Al) [59] or do not require impurities at all (e.g., electrically reconfigurable nanowire-FETs [60] or p/n-behaviour induced by energy offsets created by locally embedding Si nanowires in Si3N4 and SiO2 [61]).
Supporting Information
Supporting Information File 1: Additional figures.
Monoamine oxidase and transaminase screening: biotransformation of 2-methyl-6-alkylpiperidines by Neopestalotiopsis sp. CBMAI 2030
High-throughput screening detected transaminases (TAs) and monoamine oxidases (MAOs) in fungi by applying a fluorogenic probe. Strains F026, F037, F041, F053, and F057 showed the highest enzymatic conversions (31, 60, 30, 40, and 32%, respectively) and were evaluated for their ability to transform piperidines. Strain F053 (Neopestalotiopsis sp. CBMAI 2030) revealed unusual enzymatic activity, deracemizing 2-methyl-6-alkylpiperidines. Neopestalotiopsis sp. CBMAI 2030 was capable of converting 2-methyl-6-propylpiperidine, 2-methyl-6-butylpiperidine, and 2-methyl-6-pentylpiperidine into the corresponding piperideines with 11, 14, and 24% conversion, respectively. The activity was enhanced by cultivating the fungus with 2-methyl-6-pentylpiperidine (38% conversion and 73% ee). Electronic supplementary material: The online version of this article (doi:10.1007/s00253-017-8389-z) contains supplementary material, which is available to authorized users.
Introduction
Chiral amines are often biologically active and synthetically demanding chiral intermediates with applications in the pharmaceutical and agrochemical industries (Koszelewski et al. 2010; Carr et al. 2003). Chiral 2-methyl-6-alkylpiperidine moieties are present in solenopsin alkaloids, which are the main components of Solenopsis ant venom. These compounds usually display a (2R,6S) absolute configuration (Pianaro et al. 2012), and their proposed biosynthetic pathway suggests the participation of transaminases (TAs) and imine reductases (IREDs) in the definition of the chiral centers (Leclercq et al. 1996). Nowadays, these stereo-controlled processes can be reproduced in the lab by using the same enzyme families (TA-IRED cascade) and diketones as substrates (France et al. 2016). Additionally, a similar cascade can also be performed by monoamine oxidases from Aspergillus niger (MAO-N) and ω-transaminases (TAs) (Reilly et al. 2014). These enzymes are cofactor-dependent; TAs depend on pyridoxal 5′-phosphate (PLP) and MAO-N on flavin mononucleotide (FMN). TAs catalyze transamination, transferring amino groups from amines or amino acids to amine acceptors, ketones, or α-ketoacids, and are relevant to amino acid production in microorganisms and animals (Koszelewski et al. 2010; Hwang and Kim 2004). MAOs have been detected in many organisms (Atkin et al. 2008a, b) and are responsible for the oxidative deamination of monoamines, resulting in hydrogen peroxide and imine formation to produce ketones or aldehydes (Atkin et al. 2008a, b).
In microorganisms, these enzymatic activities can be detected by applying high-throughput screening (HTS) assays. Most HTS methodologies detect signals from a fluorogenic or chromogenic probe in 96-well microplates, which reveal that the enzymatic reaction occurred (Reymond 2008). Assays detecting fluorescent signals are more sensitive and provide a linear response to the reaction progress without interference of colored products (Reymond 2006).
Here, we used HTS and a fluorogenic probe recently suggested by Lu et al. (2008) to monitor MAO and/or TA enzymes in fungi available from the LaBioChem in-house culture collection, UNICAMP. The enzymatic activity of the best strain was validated in the deracemization of 2,6-dialkylpiperidines.
General methods
Commercially available reagents and solvents were acquired from Sigma-Aldrich, Acros, or Synth, and purified following standard procedures (Perrin et al. 1980) when necessary. Merck silica gel 60 (230–400 mesh ASTM) was used for "flash" column chromatography, and thin-layer chromatography was performed using Merck silica gel 60 F254 on aluminum foils, revealed by UV 254 nm lamp irradiation. Visualization spray reagents for TLC were either 10% w/v phosphomolybdic acid in ethanol; p-anisaldehyde, H2SO4, and acetic acid in ethanol (1:2:1:100 v/v); or Dragendorff's solution containing bismuth(III) nitrate (0.85 g), tartaric acid (10 g), and potassium iodide (16 g) in distilled water (80 mL). 1H NMR (600.17 MHz) and 13C NMR (150.91 MHz) spectra were acquired with a Bruker Avance III 600 (B0 = 14.1 T), and 1H NMR (400.13 MHz) and 13C NMR (100.63 MHz) spectra with a Bruker Avance III 400 (B0 = 9.4 T). Deuterated chloroform (CDCl3; 7.23 ppm), deuterated methanol (CD3OD; 3.35 and 4.78 ppm), and tetramethylsilane (TMS; 0.0 ppm) were used as solvents and internal references. Chemical shifts are expressed in δ (ppm) and coupling constants (J) in Hertz (Hz). GC–MS analyses were performed on an Agilent 6890 Series chromatograph coupled to a Hewlett-Packard 5973 mass spectrometer with an electron ionization (EI) source operating at 70 eV, equipped with a fused-silica capillary column HP5-MS (30 m × 0.25 mm × 0.25 μm) with 5% phenylmethylsiloxane. Helium was used as carrier gas (1 mL min−1); the injector temperature was set to 250 °C, the detector temperature to 230 °C, and an injection volume of 1.0 μL in splitless mode was used. The initial oven temperature was 50 °C, increasing at 20 °C min−1 to 290 °C. The diastereomeric discrimination of compound 11 was achieved using an Agilent 6850 chromatograph equipped with a flame ionization detector and a fused-silica capillary column Chrompack® with chiral phase Chirasil-β-cyclodextrin (25 m × 0.25 mm × 0.25 μm), H2 carrier gas (2 mL min−1), injector at 180 °C, detector at 150 °C, and splitless-mode injections of 1.0 μL. Mass spectrometry with electrospray ionization (ESI-MS) was performed using a Waters Quattro Micro API spectrometer. Samples of compounds 1 and 1d (10 μg/mL in methanol) were applied by direct infusion at 50 μL min−1, and a 0.1 mL min−1 flux of 0.1% v/v formic acid in methanol was used. Nitrogen was the nebulizing gas, the analyses were performed in positive mode, and the parameters were as follows: capillary voltage 3 kV, cone voltage 25 V, extractor voltage 3 V, RF lens voltage 0.5 V, source temperature 150 °C, desolvation temperature 200 °C, desolvation gas flow 800 L/h, and cone gas flow 50 L/h. Hydrogenations were performed using a Parr 3926 shaker hydrogenation apparatus.
Microorganisms
Microorganisms (MOs) from our private collection were used in HTS stage, and the hit F053 was identified as Neopestalotiopsis sp. and deposited at CBMAI (Brazilian Collection of Microorganisms from the Environment and Industry) under the code name CBMAI 2030. For the HTS assays, the MOs were inoculated on Petri dishes containing malt extract agar (MEA) culture medium and incubated for 72 h at 30°C. The cells were suspended in 20 mM borate buffer pH 7.4 to a final concentration of 1 mg mL −1 .
For the biotransformations, MOs were transferred to an Erlenmeyer flask (500 mL) containing 200 mL of malt extract (ME) culture medium and incubated at 30 °C and 200 rpm for 48 h. The cells were harvested by filtration under vacuum and used directly in the assays.
HTS assays
Screening with whole cells was performed using 7-(3-aminopropoxy)coumarin (probe 1) as fluorogenic substrate, following a previously optimized protocol described by Bicalho et al. (2004). The HTS assays were carried out in 96-well polypropylene microtiter plates in quadruplicate, with reaction controls in duplicate. The microtiter plates were incubated at 30 °C and 200 rpm. The fluorescent signal was monitored using a PerkinElmer EnSpire microtiter plate reader at 0, 24, 48, 72, and 96 h (excitation wavelength λex = 360 nm and emission wavelength λem = 460 nm).
The results were expressed as percentage of conversion (%) applying Eq. 1. All results were averaged, and conversion values are estimates relative to the positive control (taken as 100%).
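Eq. 1 itself is not reproduced in the text above; a plausible form consistent with the description (background-corrected fluorescence normalized to the positive control) is sketched below. The function name and the numerical values are our assumptions for illustration:

```python
def conversion_percent(f_sample, f_blank, f_positive):
    """Percentage conversion from fluorescence readings (lambda_ex = 360 nm,
    lambda_em = 460 nm), taking the positive control as 100%. This is an
    assumed form of Eq. 1; the published equation may handle the background
    differently."""
    return 100.0 * (f_sample - f_blank) / (f_positive - f_blank)

# Example: a well reading 1800 a.u. against a 200 a.u. blank and a
# 4200 a.u. positive control corresponds to 40% conversion.
print(conversion_percent(f_sample=1800.0, f_blank=200.0, f_positive=4200.0))
```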
Evolution experiments of Neopestalotiopsis sp. CBMAI 2030
Erlenmeyer flasks (500 mL) containing compound 8 (10 mg), ME culture medium (200 mL), and Neopestalotiopsis sp. CBMAI 2030 cells were incubated at 200 rpm and 30°C for 48 h. The cells were filtered, washed with 100 mM Sørensen buffer pH 7.0, and added to the biotransformation reactions. After 15 days, the cells were harvested and added to a new ME culture medium containing compound 8 (10 mg). This procedure was repeated three times.
Umbelliferone (0.325 g, 2 mmol) and acetone (10 mL) were added to a two-neck round-bottom flask under a nitrogen atmosphere. The mixture was cooled to 0 °C, and 1c (0.956 g, 5 mmol), potassium carbonate (0.47 g, 3.5 mmol), and sodium iodide (0.45 g, 3.0 mmol) were added. The reaction was kept under reflux for 18 h. The solvent was evaporated under reduced pressure, and the crude residue was purified by silica gel column chromatography eluted with hexane:ethyl acetate (2:1 v/v) to produce 0.57 g of 1d as a solid (89% yield).
Molecular analysis
DNA extraction was performed following the protocol described by Raeder and Broda (1985). The isolate was screened for the ITS locus using the ITS1 and ITS4 primers (White et al. 1990). Amplification reactions were performed by PCR with genomic DNA as template under the following conditions: an initial denaturation at 94 °C for 2 min; 30 cycles of denaturation at 94 °C for 1 min, primer annealing at 55 °C for 1 min, and primer extension at 72 °C for 3 min; and a final extension step at 72 °C for 3 min, followed by a hold at 4 °C. Amplicons were purified using the GFX PCR DNA and Gel Band Purification Kit (GE Healthcare) and sequenced with BigDye Terminator (Life Technologies, USA) on an ABI3500XL Series Sequencer (Applied Biosystems) according to the manufacturer's instructions. The BioEdit Sequence Alignment Editor v. 7.0.5.3 (Hall 1999) was used to generate the consensus sequence, which was compared against the GenBank nucleotide database (http://www.ncbi.nlm.nih.gov) and CBS (http://www.cbs.knaw.nl/). The most similar sequences were selected and combined with the sequence from the isolate. After alignment using the CLUSTAL X software (Thompson et al. 1994), a phylogenetic tree was generated with MEGA software version 4.0 (Tamura et al. 2007). Neighbor-joining criteria (Saitou and Nei 1987) were used in the analysis, and bootstrap values were calculated from 1000 pseudoreplicates.
Table 2. Conversion (%) per microorganism; conversions were determined by the area ratio of the product and internal standard areas, and are final values after 14 days of reaction.

The cell suspension was diluted to countable colony-forming units (CFUs). Further dilutions were performed until no more than 5 CFU were found. Then, 100 μL of the final diluted suspension was streaked onto a potato dextrose agar (PDA) layer on Petri dishes using a Drigalski spatula. The plates were incubated at 25 °C until isolated growing colonies were observed. Later, microscopic examination was based on slide preparations made by adding a drop of Lactophenol Cotton Blue to a microscope slide and subsequently transferring the fungal material from the culture (Crous et al. 2009). The ITS sequence of Neopestalotiopsis sp. CBMAI 2030 was deposited at GenBank with accession number KY696576.
MAO/TA high-throughput screening
The fluorogenic probe 7-(3-aminopropoxy)coumarin (1) was synthesized in 83% yield and was successfully applied to the high-throughput screening of 39 fungi in order to quickly select the best strain for TA and/or MAO enzymatic activities (Fig. 1).
The MAO/TA HTS results revealed that 25 fungi catalyzed the oxidative deamination or transamination of probe 1 at a relatively low conversion (Table 1). On the other hand, the fluorogenic probe was converted into umbelliferone with 30-60% conversion by five fungi (Table 1): F026, F037, F041, F053, and F057. These microorganisms were selected for further investigation regarding the biotransformation of 2,6-dialkylpiperidines.
The synthetic route to 6, 7, and 8 (Fig. 3) was based on the synthesis of racemic solenopsins (Pianaro et al. 2012). The 2,6-lutidine carbanion (10) was obtained by treating lutidine with butyllithium, and promptly reacted with the appropriate alkyl halide to produce the 2-methyl-6-alkylpyridines 6b, 7b, and 8b. In the second step, the pyridines were hydrogenated in the presence of 10% Pt/C to produce piperidines 6, 7, and 8. All products and intermediates were fully characterized.
Among the five fungi selected, only F053 showed good MAO activity when tested with 6, 7, and 8. The remaining microorganisms showed no activity with these piperidines ( Table 2).
The piperideines 6a, 7a, and 8a were detected in the reaction media after 7 days with 11, 14, and 24% conversion, respectively. The identities of 6a, 7a, and 8a were confirmed by mass spectrometry through a characteristic fragment at m/z 111, which is rationalized by hydrogen rearrangement through a six-membered ring intermediate, as in the McLafferty rearrangement (McLafferty and Turecek 1993) (Fig. 4).
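The m/z 111 assignment can be checked with simple nominal-mass arithmetic: a McLafferty-type rearrangement expels part of the 2-alkyl chain as an alkene, and each homologous piperideine then yields the same C7H13N fragment. A minimal sketch follows (molecular formulas are inferred here from the compound names, not quoted from the paper):

```python
# Nominal integer masses of the relevant elements
M = {"C": 12, "H": 1, "N": 14}

def nominal(formula):
    """Nominal mass of a formula given as an element->count dict."""
    return sum(M[e] * n for e, n in formula.items())

piperideines = {
    "6a (2-methyl-6-propyl)": {"C": 9,  "H": 17, "N": 1},   # M = 139
    "7a (2-methyl-6-butyl)":  {"C": 10, "H": 19, "N": 1},   # M = 153
    "8a (2-methyl-6-pentyl)": {"C": 11, "H": 21, "N": 1},   # M = 167
}
lost_alkenes = {
    "6a (2-methyl-6-propyl)": {"C": 2, "H": 4},             # ethene, 28
    "7a (2-methyl-6-butyl)":  {"C": 3, "H": 6},             # propene, 42
    "8a (2-methyl-6-pentyl)": {"C": 4, "H": 8},             # butene, 56
}

for name in piperideines:
    frag = nominal(piperideines[name]) - nominal(lost_alkenes[name])
    print(name, "->", frag)   # each prints 111, matching C7H13N
```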
Additionally, elongation of the 2-alkyl chain seems to increase the conversion efficiency of the piperidine into the piperideine. Piperideine 8a formation therefore probably relies on a better acceptance of 8 by fungus F053, and this substrate was well suited for the investigation of evolutionary strategies in a laboratory setting, an approach usually applied to bacteria with short generation times. The experimental evolution of Escherichia coli for 40,000 generations in a glucose-limited medium is one of the best-known examples in the field (Bachmann et al. 2012). The present example is the evolution of the F053 fungus toward piperidine 8, with each evolution cycle lasting 7 to 15 days. Three generations of adaptation were needed to reach 38% conversion to 8a; the third generation produced 8a in higher conversion (38%) and in a shorter time (7 days), and no further increase was obtained in subsequent generations.
Fungal analysis-identification
Molecular analysis of the consensus sequence of the F053 strain indicated that the isolate belongs to the genus Neopestalotiopsis; however, the data did not enable identification at the species level. Microscopic observations on PDA culture medium revealed the presence of globose to subglobose pycnidial conidiomata and discrete conidiogenous cells. Conidia are straight to slightly curved, 4-septate, and thin-walled, with versicolorous median cells. Hyaline cylindrical apical cells with tubular appendages (branched or not) and single, tubular, unbranched basal appendages were also present. Based on molecular and morphological aspects, the F053 strain was identified as Neopestalotiopsis sp. (Maharachchikumbura et al. 2014).
Neopestalotiopsis commonly occurs as a plant pathogen and together with Pestalotiopsis represents a fungal group known to produce a wide range of chemically novel metabolites (Maharachchikumbura et al. 2014). These species have been recovered from soil, polluted stream water, wood, paper, fabric, and wool (Guba 1961) and are associated with human and animal infections (Sutton 1999).
Discussion
HTS methods make it possible to conduct millions of chemical or pharmacological tests quickly. The results of these experiments provide starting points for understanding biochemical processes in biology and for detecting enzymatic activities (Reetz 2002). Among the available HTS methodologies, those carried out in 96-well microplates with modified substrates (fluorogenic probes), which reveal the enzymatic reaction by signals of the fluorophore products, are the most common (Reymond 2006). These assays provide quantitative results and can be automated by the use of microplate readers for data acquisition. Due to its successful application to human MAO-A and MAO-B enzymatic activities (Lu et al. 2008), 7-(3-aminopropoxy)coumarin (1) was selected as a fluorogenic probe for MAO enzymatic screening in microorganisms, based on our previous experience with whole-cell HTS experiments and analogous probes (Gonçalves and Marsaioli 2014; Lima et al. 2015; Mantovani et al. 2010). In this work, probe 1 was effective in selecting five fungi with possible MAO activity. However, this fluorogenic assay cannot differentiate between MAO and TA activities. As shown in Fig. 1, probe 1 produces, either by enzymatic oxidative deamination (MAO) or transamination (TA), the same aldehyde product (3), which undergoes spontaneous β-elimination releasing the fluorescent umbelliferyl anion (4).
Consequently, in order to overcome this issue, we performed biotransformation experiments using 2-methyl-6-alkylpiperidines as substrates to confirm the MAO activities observed in the five fungi selected by HTS. Biotransformation of these compounds must take place only by MAOs, producing the respective piperideines, as TAs do not catalyze reactions involving secondary and tertiary amines.
Using this approach, a monoamine oxidase was confirmed in Neopestalotiopsis sp. CBMAI 2030 (isolate F053), which transformed all evaluated piperidines and, at its best, deracemized 2-methyl-6-pentylpiperidine to give (2R,6S)-2-methyl-6-pentylpiperidine in 38% conversion and 73% ee within 7 days. The piperideine enantiomeric excess was not assessed due to a lack of standards and enantiomeric discrimination methods. These results also attested to the efficiency of probe 1 and the HTS technique in detecting new MAOs.
For years, 2-methyl-6-alkylpiperidines have been the subject of research in our group. These alkaloids are also known as solenopsins because they are the main constituents of Solenopsis ant venom. The two stereocenters of the solenopsins allow the existence of four stereoisomers: trans (2R,6R or 2S,6S) and cis (2R,6S or 2S,6R), and the ratio of these diastereoisomers in the venom composition varies between Solenopsis worker ants and queens (Pianaro et al. 2012).
Concerning biocatalytic synthesis, Reilly et al. (2014) developed a chemo-enzymatic process for the production of chiral 2,5-disubstituted pyrrolidines using a transaminase and the monoamine oxidase from A. niger (MAO-N), obtaining excellent enantioselectivity and diastereoselectivity. With the activity shown by Neopestalotiopsis sp. CBMAI 2030, this chemo-enzymatic process could be applied to give access to enantiomerically pure 2,6-disubstituted piperidines (solenopsins), shedding light on the role of absolute configuration in ant communication inside the nests. The fungal monoamine oxidase from A. niger and its variants are the main monoamine oxidases used as biocatalysts for the deracemization of secondary and tertiary amines (Carr et al. 2005; Dunsmore et al. 2006). The wild-type MAO-N is most active on simple straight-chain amines and shows poor activity with cyclic amines, which demanded several rounds of directed evolution to enhance its substrate acceptance (Carr et al. 2003). This characteristic creates the opportunity to search for new wild-type MAOs with a distinct activity scope. Therefore, this report adds a new enzyme to the known MAO panel, and the gene identification and heterologous expression of this novel MAO are under investigation.
"Biology",
"Chemistry",
"Environmental Science"
] |
A Wide and Deep Exploration of Radio Galaxies with Subaru HSC (WERGS). VII. Redshift Evolution of Radio Galaxy Environments at z=0.3-1.4
We examine the redshift evolution of the density environments around 2,163 radio galaxies with stellar masses of $\sim10^{9}-10^{12} M_\odot$ at redshifts $z=0.3-1.4$, based on the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP) and the Faint Images of the Radio Sky at Twenty-cm (FIRST) survey. We use the $k$-nearest neighbor method to measure the local galaxy number density around our radio galaxy sample. We find that the overdensities of the radio galaxies are weakly but significantly anti-correlated with redshift. This is consistent with the known result that the relative abundance of less-massive radio galaxies increases with redshift, because less-massive radio galaxies reside in relatively low-density regions. Massive radio galaxies with stellar masses of $M_*>10^{11}M_\odot$ are found in high-density environments compared with the control sample galaxies, which are radio-non-detected and matched in stellar mass. Less-massive radio galaxies with $M_*<10^{11}M_\odot$ reside in average-density environments. The fraction of the radio galaxies associated with neighbors within a typical major merger scale, $<70$ kpc, is higher than (comparable to) that of the control galaxies at $M_*>10^{11}M_\odot$ ($M_*<10^{11}M_\odot$). We also find that the local densities around the radio galaxies are anti-correlated with the radio luminosities and black hole mass accretion rates at fixed stellar mass. These findings suggest that massive radio galaxies have matured through galaxy mergers in the past and host supermassive black holes whose mass accretion almost ceased at $z>1.4$, while less-massive radio galaxies, having avoided such merger events, undergo active accretion just at this epoch.
INTRODUCTION
Corresponding author: Hisakazu Uchiyama. Radio galaxies host active galactic nuclei (AGNs) that launch strong radio jets/lobes, which are expected to affect the star formation not only in the host halo but also in the surrounding halos (Morganti et al. 2005; McNamara & Nulsen 2007; Bîrzan et al. 2008; Shabala et al. 2011; Yuan et al. 2016; Nesvadba et al. 2017). Thus, in order to understand galaxy formation and evolution throughout the history of the universe, it is key to understand how and where radio galaxies appear.
In the local Universe, radio galaxies are often found in rich environments, such as galaxy overdense regions or massive halos (e.g., Peacock & Nicholson 1991; Magliocchetti et al. 2004; Venturi et al. 2007). This fact is qualitatively consistent with the radio jet/lobe triggering mechanism of the Blandford–Znajek process (Blandford & Znajek 1977). According to this process, the radio jet/lobe luminosity is a monotonically increasing function of the black hole mass and spin, which are built up by galaxy mergers (e.g., Fanidakis et al. 2011). Thus, radio galaxies are likely to appear in galaxy overdense regions, where galaxy mergers have occurred frequently.
The galaxy overdensities around radio galaxies depend significantly on their types (Ramos Almeida et al. 2013; Ching et al. 2017). Radio galaxies can be divided into two types: high-excitation radio galaxies (HERGs) and low-excitation radio galaxies (LERGs). HERGs have strong radiation from the accretion disk, caused by quasar-mode accretion (Bower et al. 2006; Croton et al. 2006). This accretion channel dominantly occurs in dark matter halos with masses of ∼10^12 h^−1 M_⊙ (Orsi et al. 2016). On the other hand, LERGs appear through relatively slow mass accretion onto the supermassive black hole (SMBH) and are hosted by more massive halos than HERGs (Turner & Shabala 2015; Orsi et al. 2016). At z < 0.4, LERGs are observed to reside in significantly denser galaxy regions than HERGs (Ramos Almeida et al. 2013; Ching et al. 2017). HERGs are also reported to have lower stellar masses than LERGs on average (Ching et al. 2017).
The overdensities around radio galaxies are expected to depend on the age of the universe. The relative abundance of HERGs among radio galaxies is observed to increase with redshift in the range z = 0.4−0.8 (Donoso et al. 2009). The abundance of radio AGNs hosted by star-forming galaxies or less-massive (M_* < 10^11 M_⊙) galaxies is reported to rise with redshift (Delvecchio et al. 2018). Delvecchio et al. (2018) found that radiatively efficient accretion in radio AGNs becomes dominant at z ≳ 1, suggesting an increase in the proportion of less-massive HERGs at this epoch. These facts imply that radio galaxies do not, on average, reside in the most overdense regions at z ≳ 1. Thus, it is key to examine the redshift evolution of the galaxy densities around radio galaxies over a wide stellar mass range, up to z ≳ 1.
Unfortunately, the redshift evolution of radio galaxy environments is still shrouded in a deep fog. Kolwa et al. (2019) found that radio AGNs, on average, reside in overdense regions at z < 0.8 by using the data of the 1−2 GHz Very Large Array (VLA) survey and the Sloan Digital Sky Survey (SDSS) Stripe 82 (Heywood et al. 2016). That wide coverage over ∼100 deg^2 allows the radio galaxy environments to be characterized statistically. However, the radio AGN sample, and also the surrounding galaxy sample, is biased toward massive galaxies (M_* > 10^11 M_⊙) due to the shallowness of the survey (g < 24.5 and K ≲ 18.2). Malavasi et al. (2015) found that radio AGNs are always located in rich and dense environments up to z ∼ 2 in the Cosmological Evolution Survey (COSMOS) field, whose survey area is ∼2 deg^2. Such a small-area survey could miss rare objects and yields a sparse redshift distribution that is insufficient to investigate the redshift evolution.
The data of a Wide and Deep Exploration of Radio Galaxies with Subaru HSC (WERGS; Yamashita et al. 2018; Toba et al. 2019; Yamashita et al. 2020; Ichikawa et al. 2021; Uchiyama et al. 2021) can overcome these difficulties. WERGS is a wide optical counterpart survey of radio galaxies with imaging depth down to i ∼ 26. This project is based on the wide and deep optical imaging data produced by the Subaru Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP) survey and the 1.4 GHz radio continuum catalog of the Faint Images of the Radio Sky at Twenty-cm survey using the VLA ("FIRST"; Becker et al. 1995; Helfand et al. 2015). The combination of these surveys yields a statistical radio galaxy sample with a wide stellar mass range of ∼10^9−10^12 M_⊙ at z < 1.4.
In this paper, we examine the redshift evolution of radio galaxy environments up to z = 1.4 using the WERGS data, from which the radio galaxy sample is extracted. We also construct a control galaxy sample covering similar stellar masses and redshifts, but without radio detections. By comparing the radio galaxies with the control ones, we can probe the possible relation between the density environments and the radio jets. The dependence of the density environments on stellar mass is also investigated.
The paper is organized as follows. In section 2, we describe the data of HSC-SSP, FIRST, and WERGS. The construction of the radio galaxy and control galaxy samples is also described. In section 3, the method to measure the local densities around radio and control galaxies is explained. The results on the redshift evolution of the radio galaxy environments and the possible dependence of the local densities on the stellar masses of the galaxies are shown in section 4. The implications of our results are discussed in section 5. Finally, in section 6 we summarize our findings. We assume the following cosmological parameters: Ω_M = 0.27, Ω_Λ = 0.73, H_0 = 70 km s^−1 Mpc^−1, and magnitudes are given in the AB system.
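For readers reproducing the distance scales used below, the adopted cosmology can be set up as in this sketch (the choice of astropy is ours, not the authors'):

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Omega_M = 0.27, Omega_Lambda = 0.73, H0 = 70 km/s/Mpc, as adopted in the paper
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.27)

z = 0.9
# Proper (physical) kpc subtended per arcsec at z, used to convert angular
# separations into projected distances in pkpc
scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
print(scale)   # roughly 8 pkpc per arcsec at z = 0.9
```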
Data
The Subaru HSC-SSP survey (Kawanomoto et al. 2017; Komiyama et al. 2018; Furusawa et al. 2018; Bosch et al. 2018; Ivezic et al. 2008; Axelrod et al. 2010; Jurić et al. 2015; Schlafly et al. 2012; Tonry et al. 2012; Magnier et al. 2013) is an unprecedented deep-and-wide optical survey using HSC, with 116 2K × 4K Hamamatsu fully depleted CCDs and a field-of-view of 1.°5 in diameter (Miyazaki et al. 2012, 2018). In the present study, we use the "wide layer" data of DR S16A, which consists of wide-field images covering > 200 deg^2 with a median seeing of 0.″6−0.″8, taken with the five optical filters of the grizy bands. The 5σ limiting magnitudes of the grizy bands for point sources measured in 2.0 arcsec apertures are expected to be 26.5, 26.1, 25.9, 25.1, and 24.4, respectively. We use cModel magnitudes, which are measured by fitting two PSF-convolved galaxy models (de Vaucouleurs and exponential) to the source profile (Abazajian et al. 2004).
The survey area of the VLA FIRST survey is ∼10,000 deg^2 and covers the HSC-SSP survey area. The images produced have a typical root-mean-square photometric noise of 0.15 mJy. The angular resolution of the FIRST survey is 5.4 arcsec.
WERGS
We describe the key steps of the construction of the WERGS radio source catalog (Yamashita et al. 2018).
The HSC-SSP sources were extracted from the wide layer data of HSC-SSP. The edge of each independent field was masked because the limiting magnitudes at the edges are shallower than those in the central regions. The HSC-SSP flags summarized in Yamashita et al. (2018) were imposed on the sources to remove fake detections. The signal-to-noise ratio (SNR) in the riz bands was required to be greater than five. As a result, 23,795,523 sources were found in the effective area of 154 deg^2 (hereinafter referred to as "HSC sources").
The HSC-FIRST sample was constructed by matching the HSC sources with the FIRST sources. The FIRST sources with > 1 mJy radio flux and P(S) < 0.05 were extracted from the final-release FIRST catalog (Helfand et al. 2015). Here, P(S) is the probability that the source is a side lobe of a nearby bright source. 7,072 FIRST radio sources were found in the HSC-SSP wide layer. Then, the radio sources were matched with the HSC sources using a search radius of 1.0 arcsec, which is the separation where the contamination rate and the completeness function intersect. The contamination rate and the completeness were estimated to be 14% and 93%, respectively. As a result, 3,579 HSC-FIRST sources were found in the wide layer (Yamashita et al. 2018).
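A 1.0 arcsec positional match of this kind can be reproduced along these lines (a sketch with astropy; the coordinate arrays are placeholders, not catalog data):

```python
import numpy as np
from astropy.coordinates import SkyCoord
import astropy.units as u

# Placeholder coordinate arrays (degrees) standing in for the two catalogs
hsc = SkyCoord(ra=np.array([150.10, 150.20]) * u.deg,
               dec=np.array([2.20, 2.30]) * u.deg)
first = SkyCoord(ra=np.array([150.1001, 151.00]) * u.deg,
                 dec=np.array([2.2001, 2.50]) * u.deg)

# For each FIRST source, find the nearest HSC source on the sky
idx, sep2d, _ = first.match_to_catalog_sky(hsc)
matched = sep2d < 1.0 * u.arcsec          # the paper's separation cut
print(idx[matched], sep2d[matched].to(u.arcsec))
```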
Radio galaxies
We extract radio galaxies from the HSC-FIRST sources using the rest-frame 1.4 GHz luminosities, L_1.4GHz [W Hz^−1], following Ichikawa et al. (2021). The L_1.4GHz of the HSC-FIRST sources was calculated to be ∼10^22.0−27.5 W Hz^−1, assuming a radio spectral index of α = −0.7 with the form f_ν ∝ ν^α (Yamashita et al. 2018). The redshift of each source is the photo-z calculated by the Mizuki spectral energy distribution (SED) fitting code (Tanaka et al. 2018). We use only the HSC-FIRST sources with L_1.4GHz > 10^24 W Hz^−1. This luminosity cut can effectively separate radio galaxies from star-forming galaxies (e.g., Condon et al. 2013). Point sources are excluded as quasars using the same method as Yamashita et al. (2018). This reduces the number of sources to 2,918.
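The rest-frame 1.4 GHz luminosity follows from the observed flux density with the standard k-correction for a power-law spectrum f_ν ∝ ν^α; a minimal sketch (the formula is the standard one, the numerical example is ours):

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.27)

def L_1p4GHz(flux_mJy, z, alpha=-0.7):
    """Rest-frame 1.4 GHz luminosity [W/Hz] from the observed 1.4 GHz flux
    density, k-corrected assuming f_nu ~ nu^alpha."""
    d_L = cosmo.luminosity_distance(z).to(u.m)
    S = (flux_mJy * u.mJy).to(u.W / u.m**2 / u.Hz)
    L = 4 * np.pi * d_L**2 * S * (1 + z) ** (-(1 + alpha))
    return L.to(u.W / u.Hz).value

# A 1 mJy FIRST source at z = 1 already exceeds the 10^24 W/Hz cut:
print(f"{L_1p4GHz(1.0, 1.0):.2e} W/Hz")
```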
At least one of the HSC filters can detect the flux on the bluer or redder side of the Balmer break wavelength for sources at z = 0.3−1.4 (e.g., Ishikawa et al. 2020). For photo-z accuracy, we select galaxies with photo-z error σ_z < 0.1(1 + z) and reduced chi-square χ²_ν < 3, following Yamashita et al. (2018), Toba et al. (2019), and Ichikawa et al. (2021). As a result, we obtain 2,163 radio galaxy candidates at z = 0.3−1.4 (hereinafter referred to as "radio galaxies"). The radio galaxies have stellar masses, M_* [M_⊙], of ∼10^9−12 M_⊙, estimated with the Mizuki SED fitting code. The redshift and stellar mass distributions of the radio galaxies are shown in Figure 1.
z & M * matched control sample
A control galaxy sample without radio emission is constructed to compare the environments of radio galaxies and radio-quiet galaxies. First, we impose the same photo-z conditions as for the radio galaxies on the HSC sources. Then, we extract HSC sources (hereinafter, "control galaxies") that have the same redshift and stellar mass as each radio galaxy but have no detection in FIRST. The absolute values of the difference in redshift and stellar mass between a radio galaxy and a corresponding control galaxy, ∆z and ∆M_* [M_⊙], obey the conditions ∆z < σ_z^rg and ∆M_* < σ_M*^rg, where σ_z^rg and σ_M*^rg are the errors on the redshift and stellar mass of the radio galaxy, respectively. If multiple control galaxies are selected for a radio galaxy, one control galaxy is randomly selected. We confirm that there is at least one corresponding control galaxy for each radio galaxy. As a result, 2,163 control galaxies are selected. Figure 1 shows the stellar mass and redshift distributions of the control galaxies. There is no statistical difference in the distributions between the radio and control galaxy samples. The D- and P-values for the redshift (stellar mass) distribution between the radio and control galaxies are 0.03 (0.03) and 0.51 (0.18) in the Kolmogorov–Smirnov test, respectively.
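The matched-control selection can be sketched as follows (simplified; the array names and toy values are placeholders, and ties are broken at random as in the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def pick_control(z_rg, m_rg, sig_z_rg, sig_m_rg, z_ctrl, m_ctrl):
    """Return the index of one control galaxy matched in redshift and
    stellar mass to a radio galaxy, within the radio galaxy's errors."""
    ok = (np.abs(z_ctrl - z_rg) < sig_z_rg) & (np.abs(m_ctrl - m_rg) < sig_m_rg)
    candidates = np.flatnonzero(ok)
    if candidates.size == 0:
        return None                    # no counterpart (not the case in the paper)
    return rng.choice(candidates)      # random pick among multiple matches

# Toy example; masses here are in log10(M*/Msun), whereas the paper quotes
# mass differences in Msun -- use whichever units the errors are quoted in.
z_ctrl = np.array([0.52, 0.55, 0.90]); m_ctrl = np.array([10.9, 11.0, 10.2])
print(pick_control(0.53, 10.95, 0.03, 0.15, z_ctrl, m_ctrl))
```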
Galaxies for density measurements
We construct a galaxy sample to be used in measuring the density environments around the radio and control galaxies. The same photo-z conditions are imposed on the HSC sources as on the radio galaxies. We use photo-z sources with stellar masses of > 10^10.1 M_⊙. This stellar mass cut corresponds to the 70% completeness limit of the stellar mass functions at z = 1.4 in the Cosmic Evolution Survey (COSMOS) field (Ishikawa et al. 2020). As a result, we obtain 2,080,217 galaxies at z = 0.3−1.4 for the density measurement (hereinafter referred to as "density galaxies").
Radio galaxy with IR photometry
IR photometry for our radio sample is taken from Toba et al. (2019). CIGALE (Code Investigating GALaxy Emission; Burgarella et al. 2005; Noll et al. 2009; Boquien et al. 2019) SED fitting with stellar, star formation, AGN, and radio components was conducted for the HSC sources in Toba et al. (2019). The IR data used in the SED fitting comprise data from the VISTA Kilo-degree Infrared Galaxy Survey (VIKING; Arnaboldi et al. 2007) and the Wide-field Infrared Survey Explorer (WISE). The black hole masses are tied to the stellar masses through scaling relations (Magorrian et al. 1998; Marconi & Hunt 2003). According to Ichikawa et al. (2021), sBHAR can be expressed as a function of M_* and λ_Edd, assuming that the black hole mass scales linearly with the stellar mass (Kormendy & Ho 2013). The number of radio galaxies with L_AGN and sBHAR estimates is 1257 (hereinafter, we call this subsample the "IR radio galaxies").
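For concreteness, one commonly used form of these quantities is sketched below. The scaling factor M_BH ≈ 0.002 M_* is an assumption adopted here for illustration; the exact normalization used by the authors is not reproduced in the text above:

```python
def sbhar(L_AGN_erg_s, M_star_Msun):
    """Specific black hole accretion rate proxy: AGN bolometric luminosity
    normalized by stellar mass [erg s^-1 Msun^-1]."""
    return L_AGN_erg_s / M_star_Msun

def eddington_ratio(L_AGN_erg_s, M_star_Msun, f_bh=0.002):
    """lambda_Edd = L_AGN / L_Edd, assuming M_BH = f_bh * M_star
    (an assumed scaling, see lead-in)."""
    L_edd = 1.26e38 * f_bh * M_star_Msun   # Eddington luminosity [erg/s]
    return L_AGN_erg_s / L_edd

# Toy values: L_AGN = 1e44 erg/s in a 1e11 Msun galaxy gives lambda_Edd ~ 0.004
print(eddington_ratio(1e44, 1e11))
```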
MEASUREMENT OF GALAXY ENVIRONMENT
We use the k-nearest neighbor method to measure the local density environments of the radio galaxies and control galaxies. The details of the method are as follows.
The local density of a radio/control galaxy, Σ^k_radio/control [pkpc^−2], is estimated through the following equation:

Σ^k_radio/control = k / (π d_k^2),    (2)

where d_k [pkpc] is the projected distance from the radio/control galaxy to the k-th nearest density galaxy within the redshift range [z − σ_z, z + σ_z] (Lai et al. 2016), with z and σ_z the redshift of the radio/control galaxy and its error, respectively. Lai et al. (2016) demonstrated that when the photometric redshift error of a galaxy is employed as a redshift slice, the measured overdensity of a galaxy sample with photometric redshifts can best trace the true one. In the calculation of the local density (2), we correct it by the fraction of the non-masked projected area within a circle of radius d_k [pkpc] around the radio/control galaxy. Then, the overdensity (1 + δ^k_radio/control) for a radio/control galaxy is defined by

1 + δ^k_radio/control = Σ^k_radio/control / Σ^k_field,

where Σ^k_field [pkpc^−2] is the median local density of photo-z galaxies within the same redshift range as the radio/control galaxy. If 1 + δ^k_radio is higher than unity, the density around radio galaxies is overdense compared to the field. We choose the overdensities with k = 1, 2, and 5 to compare the local densities on different physical scales. Using the defined local densities, we examine their redshift evolution up to z = 1.4.

Figure 2. The redshift evolution of the overdensities of the radio galaxies and control galaxies. Left, middle, and right panels show the overdensities with k = 5, 2, and 1, respectively. The median and the standard error of the median of the overdensities of the radio (control) galaxies in each redshift bin are shown by the red (blue) open circles and error bars, respectively. The grey dots indicate the overdensities of the radio galaxies.
The medians of the projected distances d k are about 212, 375, and 697 pkpc for k = 1, 2 and 5 for the radio galaxy sample, respectively. It is important to compare the projected distance to the nearest neighbor, d k=1 , with the typical major merger scale < 70 pkpc (Larson et al. 2016). This comparison could allow us to understand whether the triggering of radio galaxies is associated with major mergers or not (section 5.1).
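The projected k-nearest-neighbor estimator and the overdensity defined above can be sketched as follows (2D toy data; in practice the neighbor search is restricted to the photo-z slice [z − σ_z, z + σ_z] and corrected for the masked area, which this sketch omits):

```python
import numpy as np
from scipy.spatial import cKDTree

def local_density(xy_target, xy_field, k, self_in_field=False):
    """Sigma_k = k / (pi * d_k^2), with d_k the projected distance to the
    k-th nearest density galaxy (all coordinates in pkpc)."""
    tree = cKDTree(xy_field)
    kk = k + 1 if self_in_field else k   # skip the zero-distance self-match
    d, _ = tree.query(xy_target, k=kk)
    d_k = np.asarray(d).reshape(len(xy_target), kk)[:, -1]
    return k / (np.pi * d_k**2)

rng = np.random.default_rng(1)
field = rng.uniform(0, 5000, size=(2000, 2))   # toy projected positions [pkpc]
targets = rng.uniform(0, 5000, size=(10, 2))

sigma_k = local_density(targets, field, k=5)
sigma_fld = np.median(local_density(field, field, k=5, self_in_field=True))
print(sigma_k / sigma_fld)                     # 1 + delta_k for each target
```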
Redshift evolution of density environments
We show the measured overdensities of the radio galaxy sample and the control galaxy sample as a function of redshift in Figure 2. We find that at z < 0.9 the median 1 + δ^k_radio for all k is significantly higher than unity. The excess decreases as k increases. This suggests that the radio galaxies at z < 0.9, on average, reside in compact galaxy-overdense regions. On the other hand, at z > 0.9, the median overdensities of the radio galaxies are close to unity for all k. These results suggest that the ambient densities of the radio galaxies are negatively correlated with redshift. The Spearman rank correlation test supports this negative correlation. The results of the test are summarized in Table 1.
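The quoted significances correspond to the standard Spearman rank test, e.g. (toy arrays standing in for the measured redshifts and overdensities):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
z = rng.uniform(0.3, 1.4, 500)
# Toy overdensities with a weak negative redshift trend plus scatter
delta = 1.3 - 0.2 * z + rng.normal(0, 0.5, z.size)

rho, p = spearmanr(z, delta)
print(f"rho = {rho:+.2f}, P = {p:.1e}")  # weak but significant for these toy numbers
```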
For the stellar-mass-matched control sample, the overdensities for all k are confirmed to be, on average, around unity over all the redshifts that we investigate. This suggests that the radio jet launches are related to the galaxy overdensities.
Dependence of the overdensities on the stellar masses
The dependence of the overdensities of the radio galaxies on their stellar masses is shown in Figure 3. We divide the radio/control galaxies into four subsamples with z = 0.3−0.5, 0.5−0.8, 0.8−1.1, and 1.1−1.4, in order to remove the redshift dependence of the overdensities. In the low stellar mass regime (log M_*/M_⊙ < 11), the median overdensities of the radio and control galaxies are comparable and are around unity for all k. The control galaxies reside in high-density regions in the high stellar mass regime of log M_*/M_⊙ > 11. The ambient densities of radio galaxies are, on average, significantly higher than those of the control galaxies at log M_*/M_⊙ > 11 for the case of k = 1. The positive correlations between stellar mass and density are found to be significant using the Spearman rank correlation test, as summarized in Table 2. The overdensities of the radio galaxies are more strongly correlated with the stellar masses than those of the control galaxies. This overdensity excess decreases for k = 2 and 5. There are no significant differences in the overdensities with k = 2 and 5 between the radio and control galaxies at z > 0.8.
We can compare our findings with previous studies. Kolwa et al. (2019) found that massive radio AGNs, on average, reside in the kpc-scale overdense regions. Malavasi et al. (2015) studied the environments of the radio AGNs with M * > 10 10 M ⊙ by the deep but narrow survey in the COSMOS field and found that the radio AGNs reside in high density regions compared to the non-radio sources with the same stellar masses. These results are consistent with our result of the massive radio galaxies.
Redshift evolution of radio galaxy environments
We found a weak negative correlation of the overdensity of the radio galaxies with redshift at z = 0.3−1.4. At z < 0.9, the radio galaxies on average reside in galaxy overdense regions, while at z > 0.9, the overdensities of the radio galaxies approach unity. Kolwa et al. (2019) found that at z < 0.8, radio AGNs exist in overdense regions. We have, for the first time, provided statistical evidence that the overdensities of radio galaxies tend to decrease with redshift at z > 0.9. This can be explained by the increase of the relative abundance of less-massive HERGs with redshift, assuming that the local observational result that less-massive HERGs reside in relatively low-density regions compared to massive LERGs (Ching et al. 2017) is valid at z = 0.3−1.4. In fact, Donoso et al. (2010) found that the proportion of less-massive HERGs increases with redshift. Delvecchio et al. (2018) also suggested that at z ≳ 1 the accretion disks of radio AGNs tend to be radiatively efficient, implying an increasing HERG population among radio galaxies beyond z ∼ 1. In our WERGS radio galaxy sample, the radio galaxies with high sBHAR, that is, HERG-like galaxies, are dominantly distributed at z ≳ 1 (Toba et al. 2019; Ichikawa et al. 2021). The stellar masses of the radio galaxies are also found to be significantly anti-correlated with redshift: the correlation coefficient, ρ, and P-value in the Spearman rank correlation test are −0.17 and 1.2 × 10^−9, respectively. These results suggest that the relative abundance of less-massive HERGs in the radio galaxy sample is high at z ≳ 1, and thus the overdensities of the radio galaxies tend to decrease with redshift.
5.2. Dependence of the density environments on the SMBH states of radio galaxies

Figure 3. The dependence of the overdensities of the radio/control galaxies on the stellar masses at z = 0.3 − 0.5, z = 0.5 − 0.8, z = 0.8 − 1.1, and z = 1.1 − 1.4 from left to right. From the top row to the bottom, the overdensities with k = 5, 2, and 1 of the radio and control galaxies are shown. The median and the standard error of the median of the overdensities of the radio (control) galaxies in each logarithmic stellar mass bin are shown by the red (blue) open circles and error bars, respectively. The grey dots show the overdensities of the radio galaxies.

Ching et al. (2017) reported that HERGs tend to reside in lower density regions than LERGs at z < 0.4. If this result is valid even at z = 0.3 − 1.4, it implies that the ambient densities of the radio galaxies are anti-correlated with their radio luminosities and the mass accretion onto their SMBHs, because HERGs have high SMBH accretion rates and radio luminosities compared to LERGs (e.g., Heckman & Best 2014; Miraghaei & Best 2017). In this subsection, we discuss the possible correlation between the ambient environments and such AGN states of radio galaxies at higher z. Figure 4 shows the overdensity of the IR radio galaxies as a function of the bolometric AGN luminosity L AGN . Objects with higher L AGN have higher mass accretion onto their SMBHs. The overdensities for all k, on average, tend to be higher than unity at the faint end of the AGN luminosities. As the AGN luminosities increase, the overdensities decrease and eventually reach around unity in all the redshift bins. However, these relations are not significant according to the Spearman rank correlation test (Table 3).
Note that the AGN luminosities strongly depend on the black hole masses (e.g., Woo & Urry 2002) and thus on the stellar masses. In order to capture the mass accretion rate onto the SMBH, we should use the AGN luminosity normalized by the stellar mass (that is, the sBHAR). Figure 5 shows the relation between the overdensities and the sBHAR of the IR radio galaxies. The two quantities are negatively correlated with each other. The weak but significant negative correlation between the sBHAR and the overdensities is confirmed by the Spearman rank correlation test (Table 3). In the low sBHAR regime, the overdensities are, on average, significantly higher than unity. On the other hand, in the high sBHAR regime, the overdensities of the IR radio galaxies are around unity. The fraction of the IR radio galaxies with λ Edd ≥ 1 tends to increase with redshift (∼ 8% at z < 0.8, and ∼ 15% at z > 0.8).
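As a rough numerical illustration of how the sBHAR relates to the Eddington ratio λ_Edd, the sketch below uses the standard Eddington luminosity L_Edd ≈ 1.26 × 10^38 (M_BH/M_⊙) erg s^−1 together with an assumed M_BH ≈ 0.002 M_* scaling; the input values and the scaling are hypothetical, and the paper's exact calibration may differ.

```python
# Illustrative conversion from AGN luminosity and stellar mass to the sBHAR
# and an approximate Eddington ratio. The M_BH ~ 0.002 * M_* scaling is an
# assumption for illustration only.
L_AGN = 1e45          # bolometric AGN luminosity [erg/s] (hypothetical)
M_star = 10 ** 11.0   # stellar mass [M_sun] (hypothetical)

sBHAR = L_AGN / M_star            # specific BH accretion rate [erg/s per M_sun]
M_BH = 0.002 * M_star             # assumed BH-to-stellar mass scaling
L_Edd = 1.26e38 * M_BH            # Eddington luminosity [erg/s]
lambda_Edd = L_AGN / L_Edd
print(f"sBHAR = {sBHAR:.2e} erg/s/Msun, lambda_Edd = {lambda_Edd:.2f}")
```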
We further examine the dependence of the overdensities on the radio luminosities, L 1.4GHz , which can trace the radio jet power (e.g., Fanidakis et al. 2011). As shown in Figure 6, no correlation between the overdensities and radio luminosities is found. The Spearman rank correlation test also finds no significant correlation between them (Table 3). This is consistent with the result of Kolwa et al. (2019), who found no correlation between the surrounding densities and radio luminosities of radio galaxies at z < 0.8. As with the AGN luminosity, the radio luminosity depends on the black hole mass and thus the stellar mass (e.g., Fanidakis et al. 2011). In order to separate HERGs and LERGs in the radio luminosity space more effectively, the radio luminosity should be normalized by the stellar mass (e.g., Miraghaei & Best 2017). Figure 7 shows the relation between the overdensities and the normalized radio luminosities L 1.4GHz /M * [W Hz^−1 M ⊙ ^−1] of the radio galaxies. The radio galaxies at the faint end of the normalized radio luminosities, on average, tend to reside in overdense regions. The surrounding densities of the radio galaxies tend to decrease as the normalized radio luminosities increase. The local overdensities of the bright-end radio galaxies converge to unity or below. The Spearman rank correlation test indicates a significant negative correlation between the normalized radio luminosities and the overdensities of the radio galaxies, as summarized in Table 3.
To summarize, the radio galaxies with low mass accretion and low radio luminosity tend to reside in galaxy overdense regions, on average, at fixed stellar mass. These findings are qualitatively consistent with the result at z < 0.3 found by Ching et al. (2017) that LERGs reside in higher density regions than HERGs. Our results suggest that the correlation between the SMBH states and the ambient densities of radio galaxies holds up to z = 1.4. The anti-correlation between the radio jet luminosities and the surrounding densities of the radio galaxies is also consistent with the theoretical model of radio jets proposed by Kawakatu et al. (2009), if the core matter densities of the radio galaxies are correlated with the surrounding galaxy densities. This model shows that if the matter densities in the core of a radio galaxy are relatively high, the radio jets launched from the radio galaxy cannot expand beyond the host halo because the jets lose energy through interaction with the surrounding matter. This would result in LERG-like radio galaxies.
Triggering of radio galaxies and the role of the local environment
The triggering of radio galaxies is closely associated with the local density environment (e.g., Kolwa et al. 2019). In particular, the distance to the nearest neighbor can be a good measure for diagnosing whether radio galaxy triggering is linked with galaxy mergers or secular processes (Ching et al. 2017). In this subsection, we examine whether d k=1 of the radio galaxies is on the typical major-merger scale, and discuss whether the radio galaxies are powered by mergers.
In Section 4.2, we found that the massive radio galaxies with log M * /M ⊙ > 11 reside in higher density regions than the control galaxies. The density enhancements around the radio galaxies are most prominent when k = 1. These facts suggest that the projected distances from the radio galaxies to their nearest neighbors, d k=1 , are, on average, small compared to those of the control galaxies at the massive end. The upper panel of Figure 8 shows the dependence of d k=1 of the radio and control galaxies on the stellar masses. We find that the median d k=1 of the radio galaxies is significantly smaller than that of the control galaxies in the stellar mass regime of log M * /M ⊙ > 11.
We define the pair fraction, f pair (M * ) = n d<70 (M * )/n tot (M * ), where n d<70 (M * ) is the number of radio/control galaxies with d k=1 < 70 pkpc, which corresponds to the separation of two galaxies at the beginning of a major merger (Larson et al. 2016), and n tot (M * ) is the total number of radio or control galaxies in a given stellar mass bin of log M * ± 0.5. The lower panel of Figure 8 shows the pair fraction as a function of the stellar masses of the radio and control galaxies. The pair fraction is estimated to be ∼ 0.1 − 0.2, which is comparable to the results of previous studies at this epoch (e.g., Lotz et al. 2011). We find that the pair fraction of the radio galaxies is significantly higher than that of the control galaxies at log M * /M ⊙ > 10. This suggests that the triggering of a significant fraction of massive-end radio galaxies is associated with galaxy mergers. The values of d k=1 and f pair (M * ) for each stellar mass are summarized in Table 4.
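A minimal sketch of this pair-fraction estimate, assuming arrays of log stellar masses and nearest-neighbor distances are available; the sample below is randomly generated for illustration, not the WERGS data.

```python
import numpy as np

def pair_fraction(log_mstar, d_k1, mass_center, half_width=0.5, d_pair=70.0):
    # f_pair = n(d_k1 < d_pair) / n_tot within log M* = mass_center +/- half_width;
    # d_pair = 70 pkpc follows the major-merger separation of Larson et al. (2016).
    in_bin = np.abs(log_mstar - mass_center) < half_width
    n_tot = in_bin.sum()
    if n_tot == 0:
        return np.nan
    return (d_k1[in_bin] < d_pair).sum() / n_tot

# Hypothetical sample: log stellar masses and nearest-neighbor distances [pkpc].
rng = np.random.default_rng(1)
log_mstar = rng.uniform(10.0, 12.0, 500)
d_k1 = rng.exponential(300.0, 500)
for m in (10.5, 11.0, 11.5):
    print(m, pair_fraction(log_mstar, d_k1, m))
```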
Less-massive radio galaxies appear to have a different triggering mechanism from the more massive radio galaxies, because at log M * /M ⊙ < 11 the pair fraction and d k=1 (or the local densities) of the less-massive radio galaxies are comparable to those of the control galaxies within the 1σ errors (Figure 8, and Section 4.2). We also found in Section 5.2 that the radio galaxies with high black hole accretion rates tend to reside in low-density regions. These findings imply that high mass accretion onto the SMBHs of less-massive radio galaxies occurs regardless of the richness of the associated environments. This is consistent with a recent result of Davis et al. (2022), who studied radio AGNs hosted by dwarf galaxies at z < 0.5 by combining HSC and Low-Frequency Array (LOFAR) data. Davis et al. (2022) found that the ignition of AGNs in dwarf galaxies comes from self-interactions, not from interactions with their surroundings.
Our findings imply that massive radio galaxies have experienced galaxy mergers in the past, had already grown up by z > 1.4, and that at z = 0.3 − 1.4 the mass accretion onto their SMBHs has almost ceased, because the galaxy merger rate is expected to increase with stellar mass and redshift (e.g., Hopkins et al. 2010). Less-massive radio galaxies, on the other hand, are expected to have avoided merger events and to be beginning to undergo active accretion just at z = 0.3 − 1.4. According to recent simulations (e.g., Bower et al. 2017; Habouzit et al. 2017), the reservoir gas stored in a less-massive galaxy is expected to be easily blown off and carried beyond the host halo by star formation pressure. Such negative feedback prevents the gas from falling into the central SMBH, and causes a "galaxy grows first, BH comes later" phase (e.g., Ichikawa et al. 2021). This phase is expected to continue until the stellar mass reaches a critical value (M * ∼ 10^10.5 M ⊙ ; Bower et al. 2017; Habouzit et al. 2017). Once the stellar mass approaches this critical value, the negative feedback no longer has enough energy to blow the gas beyond the host halo, and active mass accretion onto the central SMBH can begin. Most of the less-massive radio galaxies are expected to have been spared from the environmental effects, such as galaxy mergers, that promote the rapid growth phase of SMBHs.
CONCLUSION
We exploited the wide and deep imaging data of HSC-SSP to characterize the redshift evolution of radio galaxy environments at z = 0.3 − 1.4. The radio galaxy sample was extracted from the WERGS data (Yamashita et al. 2018). The redshifts and stellar masses were estimated with the Mizuki SED-fitting code (Tanaka et al. 2018). The control sample was constructed by matching the stellar masses and redshifts of the radio galaxies. Multi-wavelength data from Toba et al. (2019) for the radio galaxies were used to examine the possible correlation between the surrounding density environments and the AGN states of the radio galaxies. The k-nearest neighbor method was used to define the surrounding densities around the radio and control galaxies. We examined the redshift evolution of the local densities of the radio galaxies, and the possible relations between the densities and properties of the radio galaxies such as the stellar masses and AGN states.

Figure 8. The projected distances from the radio galaxies (red points) and control galaxies.

Our findings are as follows:

• There is no correlation between the overdensities and redshifts of the control galaxies, while the overdensities of the radio galaxies are weakly but significantly anti-correlated with redshift.
• In the low stellar mass regime of log M * /M ⊙ < 11, both the radio and control galaxies, on average, reside in similar environments with overdensities of ∼ 1. On the other hand, at log M * /M ⊙ > 11, the control galaxies tend to reside in regions with overdensities higher than unity, and the median overdensity of the radio galaxies is significantly higher than that of the control galaxies.
• The projected distances from the radio galaxies to their nearest neighbors are significantly smaller than those of the control galaxies at the massive end. On the other hand, no difference in the projected distances is found between the radio and control galaxies at the less-massive end.
• The radio luminosities normalized by stellar mass and the sBHAR of the radio galaxies are significantly anti-correlated with their surrounding densities at fixed stellar mass.
Our results support the known scenario in which the relative abundance of less-massive HERGs, which tend to reside in lower density regions than LERGs, increases with redshift. In addition, our findings suggest that massive radio galaxies have already matured through galaxy mergers in the past and host SMBHs whose mass accretion has almost ceased at z = 0.3 − 1.4, while less-massive radio galaxies undergo active accretion in this epoch and have avoided merger events.
"Physics"
] |
1st Workshop on Diversity, Accessibility and Inclusivity in Cyber Security
Diversity, Accessibility and Inclusivity are important considerations for cyber security, to ensure that security mechanisms are suitable for all users, who could have differing abilities as a result of physical or cognitive impairments. One way this can be achieved is by implementing accessible authentication, which is recommended as part of the Web Content Accessibility Guidelines (WCAG 2.1) defined by the World Wide Web Consortium (W3C). There have been recent examples where authentication has presented challenges for people with disabilities (BBC News, 2020). Other considerations include deriving heuristics during the design of systems, to ensure that the accessibility requirements of potential users are satisfied. This Workshop will investigate the recent developments that aim to ensure cyber security is accessible and inclusive to all users. It will discuss the methods that can be implemented to develop such systems. It will also serve as a forum for researchers in the domains of accessibility, cyber security, human computer interaction, human centred design and System of Systems. Discussions during the Workshop will lead to the creation of a road map to enhance the diversity, accessibility and inclusivity of cyber security.
MOTIVATION
Identifying and specifying system security needs for users is difficult (Cheng and Atlee, 2009), but designing for a specific audience does not promote inclusivity for the wider audience. Mainstream research in security and privacy tends to focus on technical mechanisms and usability (Wang et al., 2018), but does not account for specific needs promoting accessibility. Worldwide there are over one billion people with disabilities, accounting for 15% of the total population (World Bank, 2021). Disability can be classified using the World Health Organization (2001) International Classification for Disability, Functioning and Health Framework (ICF). Kostanjsek (2011) states that disabilities should not characterise individuals, but be seen as "a complex interaction between the person and their environment". It is important to consider accessibility and inclusion to ensure that this user community has equal access to opportunities. This can be achieved through the use of assistive technologies that can "increase, maintain or improve the functional capabilities of persons with disabilities" (Assistive Technology Industry Association, 2021). Dr Whittington (Organising Committee) is an academic representative for the Smart Homes and Independent Living Commission, which provides recommendations to support the adoption of smart home technology in social care.
It has been identified that people with disabilities can encounter barriers due to web security and privacy technologies; for example, users with learning disabilities experience challenges when following multi-step procedures on websites (World Wide Web Consortium, 2020), and individuals with physical disabilities can be unable to authenticate themselves using one-time codes over the telephone. This results in poor accessibility and potential exclusion from purchasing goods, accessing services or communicating with organisations, leading to frustration and cancelled transactions. Accessible authentication mechanisms are recommended by the World Wide Web Consortium (2021) to promote the inclusivity of computing for all abilities.
Due to COVID-19, financial services have been among the hardest hit by cyber security threats; therefore Financial Technology (FinTech) is the exemplar case study for this Workshop. Global smartphone penetration increased from 33.5% in 2016 to 46.45% in 2020 (Statista, 2021). The ubiquitous nature of this device has provided the rails for inclusive services leveraging the smartphone. The financial services sector has benefitted from this penetration, where FinTech is being leveraged as a tool to provide financial services to over 1.7 billion people globally (Asli et al., 2018). Incumbents in the Financial Services sector have promoted FinTech through Mobile Financial Services to provide innovative products, reach new customer bases and reduce operational costs, while challenger banks are beginning to emerge. Whilst the advent of FinTech has brought financial services closer to a broader range of customers than ever before, it has introduced the risks of cybercrime to these customers (Shetty, 2018; Stone et al., 2017). Strong technology countermeasures exist, but have not succeeded in minimising the threat, due to human factor concerns (West et al., 2009). To address this problem, there is a need to ensure that security controls are usable by end-users, that security Application Programming Interfaces are usable by developers, and that these controls account for various groups of users, accessibility needs and cognitive limitations. This Workshop will examine various human factor approaches for making security controls in FinTech inclusive, to improve trust and consequently adoption. It follows the organisation of workshops on Human Centred Design for Intelligent Environments (HCD4IE) at British HCI 2016 (Bournemouth University, 2016) and 2018 (WikiCFP, 2021). Bournemouth University has also held two Assistive Technology Symposiums, in 2018 and 2019 (BU Research Blog, 2018; 2019).
TOPICS
This Workshop aims to explore the application of solutions to promote diversity, inclusivity and accessibility for cyber security systems, to ensure they are suitable for all abilities. It will bring together a range of topics, focused on three themes:
WORKSHOP FORMAT
The Workshop will be a full-day event with a Call for Papers. The schedule will be as follows: Mr Srin Madipalli is a technology entrepreneur, executive and investor, who was the Founder/CEO of a startup that exited to Airbnb in 2017. Due to his physical disability, he is involved with advocacy work relating to disability inclusion in the workplace, helping technology companies to increase their accessibility and sharing his experiences of being an entrepreneur with a disability. Mr Madipalli is on the Boards of leading advocacy groups in the UK and US, including being an Advisory Board member at Included VC, which aims to help people from underrepresented groups find roles in the venture capital sector.
AUDIENCE
The intended audience for this Workshop will be academics and industry practitioners involved in the development of accessible cyber security solutions.
Researchers in the domains of Human Computer Interaction, Human Centred Design and System of Systems will also be targeted.
PUBLICATION STRATEGY
Accepted Workshop papers will be included in the proceedings of British HCI 2021 and published in the BCS eWiC repository and the ACM Digital Library. These papers will also be considered as chapters in a future book publication of the proceedings.
ORGANISING COMMITTEE
– 15:10 Paper Presentations (Part 2)
15:10 – 15:30 Break
15:30 – 16:30 Panel Discussion

Keynote Speakers

Dr Dave Edyburn (Senior Research Scientist and Professor Emeritus, University of Wisconsin-Milwaukee) has research interests focusing on the use of technology to enhance teaching, learning and performance. He has authored over 175 articles and book chapters on the use of technology in special education, and his work represents a variety of contributions to theory, research and practice. Professor Edyburn is the editor of the Journal of Research on Technology in Education and Past Chair of the AERA Online Teaching and Learning SIG.
"Computer Science"
] |
Common Fixed-Point Theorem and Projection Method on a Hadamard Space
In this paper, we obtain an equivalent condition to the existence of a common fixed point of a given family of nonexpansive mappings defined on a Hadamard space. Moreover, if the space is bounded, we show that the generating process of the approximate sequence by a specific projection method will stop in finitely many steps if there is no common fixed point. It is a significant advantage to reveal the nonexistence of a common fixed point in a finite time.
Introduction
The study of fixed points of mappings on complete metric spaces is a central topic in nonlinear analysis, and it has been considered from various perspectives. Many researchers have investigated the existence of fixed points of nonlinear mappings and their approximation techniques. They typically adopt a subset of a Hilbert or Banach space as the domain of the mappings. One of the most important results is Kirk's fixed-point theorem for a nonexpansive mapping defined on a nonempty bounded closed convex subset of a reflexive Banach space having normal structure [1]. On the other hand, approximation schemes for fixed points have also been actively studied. A nonlinear ergodic theorem by Baillon [2] can be regarded as an approximation scheme for a fixed point of a nonexpansive mapping. The convergence of a Mann-type [3] iterative scheme to a fixed point of a nonexpansive mapping was proved by Reich [4]. Wittmann [5] proved a strong convergence theorem for a Halpern-type [6] iterative sequence in Hilbert spaces, and it was generalized to Banach spaces by Shioji and Takahashi [7].
In 2004, Kirk [8] proved the following remarkable theorem, a milestone in the history of fixed-point theory on geodesic spaces.
Theorem 1 (Kirk [8]). Let X be a Hadamard space and U a bounded open subset of X. Let T : cl U → X be a nonexpansive mapping. Suppose that there exists p ∈ U such that x ∉ [p, Tx] \ {Tx} for every boundary point x of U. Then T has a fixed point.
This result is a fixed-point theorem for nonexpansive mappings on a complete geodesic space. After this work, many researchers have studied fixed-point theory in geodesic spaces. In particular, the techniques to approximate a fixed point of a given nonexpansive or other type of mapping have been investigated, and many valuable results have been obtained. Saejung [9] obtained a convergence theorem for the iterative sequence generated by the Halpern scheme to the fixed point closest to a given anchor point. He, Fang, López, and Li [10] showed a ∆-convergence theorem for the Mann-type iterative sequence.
The shrinking projection method was first proposed by Takahashi, Takeuchi, and Kubota [11]. There are many variations of projection methods, and it is one of the most important schemes among them. For recent works, see [12,13], for instance. This method has also been studied in the setting of complete geodesic spaces, and several convergence theorems were proved. Moreover, the following result, a modified version of the shrinking projection method in a Hadamard space, was recently proved.
Theorem 2 (Kimura [14]). Let X be a Hadamard space and suppose that a subset {z ∈ X | d(u, z) ≤ d(v, z)} is convex for any u, v ∈ X. Let T : X → X be a nonexpansive mapping with Fix T ≠ ∅. Generate a sequence {x n } ⊂ X as follows: x 1 ∈ X is given, C 1 = X, and C n+1 = {z ∈ C n | d(Tx n , z) ≤ d(x n , z)} with x n+1 = P C n+1 x n for each n ∈ N. Then {x n } is ∆-convergent to a fixed point of T.

We will focus on this method in this study. In settings such as Banach or Hilbert spaces, some of the approximate sequences mentioned above can be used to characterize the existence of a fixed point of a given mapping. In particular, the boundedness of a generated sequence often guarantees the existence of a fixed point; see [15] and references therein.
However, in a practical calculation, it is challenging to show the boundedness of the sequence because we need to calculate infinitely many points to confirm it.
In this paper, we obtain an equivalent condition to the existence of a common fixed point of a family of nonexpansive mappings defined on a Hadamard space by generating an approximate sequence with an iterative process. Moreover, under the assumption that the space is bounded, we show that the generating process of the sequence will stop in finitely many steps if there is no common fixed point. We emphasize that judging the nonexistence of fixed points in a finite time is a significant advantage. As an application of our results, we also consider a convex minimization problem for a family of convex functions. The results characterize the existence of a common minimizer. We also consider the minimization problem on a given convex subset of the domain of the function.
Preliminaries
Let (X, d) be a metric space. We say γ xy : [0, 1] → X is a geodesic between x, y ∈ X if γ xy (0) = x, γ xy (1) = y, and d(γ xy (s), γ xy (t)) = |s − t| d(x, y) for any s, t ∈ [0, 1]. If a geodesic γ xy exists for any x, y ∈ X, then X is called a geodesic space. In particular, X is said to be uniquely geodesic if for any x, y ∈ X, a geodesic between them exists uniquely. In this case, the image of the geodesic γ xy is denoted by [x, y]. In a uniquely geodesic space X, the convex combination between two points is naturally defined; for x, y ∈ X and t ∈ [0, 1], we define tx ⊕ (1 − t)y = γ xy (1 − t). Using this notion, we can define the convexity of a subset of X; we say C ⊂ X is convex if tx ⊕ (1 − t)y ∈ C for any x, y ∈ C and t ∈ [0, 1]. We usually define a CAT(0) space by using notions of geodesic triangles and comparison triangles on a model space. In this paper, we use the following definition, which is equivalent to the original one. A uniquely geodesic space X is called a CAT(0) space if for any x, y, z ∈ X and t ∈ [0, 1], the inequality d(tx ⊕ (1 − t)y, z)² ≤ t d(x, z)² + (1 − t) d(y, z)² − t(1 − t) d(x, y)² holds. For the formal definition, see [16,17] for instance.
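As a concreteness check on this definition (our own verification, not taken from the paper): in a Hilbert space, the convex combination tx ⊕ (1 − t)y is the usual tx + (1 − t)y, and the CAT(0) inequality holds with equality:

```latex
\begin{aligned}
\|tx+(1-t)y-z\|^{2}
  &= t^{2}\|x-z\|^{2}+(1-t)^{2}\|y-z\|^{2}+2t(1-t)\langle x-z,\;y-z\rangle\\
  &= t\|x-z\|^{2}+(1-t)\|y-z\|^{2}-t(1-t)\|x-y\|^{2},
\end{aligned}
```

where the second equality uses ‖x − y‖² = ‖x − z‖² + ‖y − z‖² − 2⟨x − z, y − z⟩. Hence every Hilbert space, and every closed convex subset of one, satisfies the CAT(0) inequality, which matches the examples listed below.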
A Hadamard space is defined as a complete CAT(0) space. This space includes some essential classes of sets for studying nonlinear mappings and their fixed points, such as closed convex subsets of a Hilbert space, real Hilbert balls, R-trees, and others. Notice that a closed convex subset of a Banach space is not necessarily a Hadamard space.
Let X be a metric space. We say x ∈ X is a fixed point of a mapping T : X → X if it satisfies x = Tx. The set of all fixed points of T is denoted by Fix T; that is, Fix T = {x ∈ X | Tx = x}. A mapping T : X → X is said to be nonexpansive if d(Tx, Ty) ≤ d(x, y) for all x, y ∈ X. It is easy to see that Fix T is always closed and convex if X is a CAT(0) space.
Let X be a Hadamard space and C ⊂ X a nonempty closed convex subset of X. It is known that, for each x ∈ X, there exists a unique y x ∈ C such that d(x, y x ) = inf y∈C d(x, y). Using this point, we define the metric projection P C : X → C by P C x = y x . We also know that P C is a nonexpansive mapping with Fix P C = C.
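For intuition, in the Hadamard space R^d the metric projection onto a closed ball has a closed form; the following is a minimal sketch of our own (the inputs are hypothetical), illustrating that P C fixes exactly the points of C:

```python
import numpy as np

def project_ball(x, center, radius):
    # Metric projection P_C onto the closed ball C = {y : ||y - center|| <= radius},
    # a closed convex set; P_C is nonexpansive and Fix P_C = C.
    v = x - center
    norm = np.linalg.norm(v)
    if norm <= radius:
        return x                       # x already lies in C, so P_C x = x
    return center + radius * v / norm  # otherwise project radially onto the boundary

x = np.array([3.0, 4.0])
print(project_ball(x, center=np.zeros(2), radius=1.0))  # -> [0.6, 0.8]
```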
For a bounded sequence {x n } ⊂ X, we call z ∈ X an asymptotic center of {x n } if z minimizes the function lim sup n→∞ d(x n , ·) over X. It is known that the asymptotic center of every bounded sequence in a Hadamard space is unique, and it belongs to the closed convex hull of {x n }.
A bounded sequence {x n } is said to be ∆-convergent to x 0 ∈ X if every subsequence of {x n } has an identical asymptotic center x 0 .In a Hadamard space, we know that every bounded sequence has a ∆-convergent subsequence [18].
For more details on Hadamard spaces and related notions, see [17]. Let X be a Hadamard space and f : X → ]−∞, ∞]. We say f is proper if f (x 0 ) < ∞ for some x 0 ∈ X. f is said to be lower semicontinuous if f (x 0 ) ≤ lim inf n→∞ f (x n ) whenever {x n } converges to x 0 ∈ X. The set of all minimizers of f on a subset D ⊂ X is denoted by argmin D f .
Common Fixed Point Theorem
We consider conditions equivalent to the existence of a common fixed point of a family of nonexpansive mappings in a Hadamard space. We begin with the following simple lemma.

Lemma 1. Let X be a Hadamard space and let {C n } be a sequence of nonempty closed convex subsets of X which is decreasing with respect to inclusion, that is, C n+1 ⊂ C n for all n ∈ N. Let {y n } ⊂ X be a sequence such that y n ∈ C n for every n ∈ N. If {y n } is bounded, then its asymptotic center belongs to ⋂_{n=1}^∞ C n .
Proof. Suppose that {y n } is bounded, and let y 0 be the unique asymptotic center of {y n }.
Fix k ∈ N arbitrarily. Letting w n = y n+k for n ∈ N, we see that the sequence {w n } has the same asymptotic center y 0 as {y n }. From the property of {C n }, it follows that {w n } ⊂ C k . Since C k is closed and convex, we have y 0 ∈ C k . Since k ∈ N is arbitrary, we obtain y 0 ∈ ⋂_{k=1}^∞ C k , the desired result.
The following main result shows that we can characterize the existence of a common fixed point of given nonexpansive mappings by using the generating procedure of its approximate sequence.

Theorem 3. Let X be a Hadamard space and suppose that a subset {z ∈ X | d(u, z) ≤ d(v, z)} of X is convex for any u, v ∈ X. Let {T i : X → X | i = 1, 2, . . ., m} be a family of nonexpansive mappings. Generate a sequence {x n } in X with a sequence {C n } of subsets of X by the following steps:

Step 0. Let x 1 ∈ X be given, set C 1 = X, and let n = 1.
Step 1. Define C n+1 = {z ∈ C n | d(T i x n , z) ≤ d(x n , z) for all i = 1, 2, . . ., m}, and then:
(1) if C n+1 ≠ ∅, then let x n+1 = P C n+1 x n , increment n by 1, and go to Step 1;
(2) if C n+1 = ∅, then C k = ∅ and x k is left undefined for all k ≥ n + 1, and the generating process terminates.
Then, the following conditions are equivalent:
(a) ⋂_{i=1}^m Fix T i ≠ ∅;
(b) C n ≠ ∅ for every n ∈ N.
Further, in this case, {x n } is well defined and ∆-convergent to some x 0 ∈ ⋂_{i=1}^m Fix T i .
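Before the proof, a small numerical illustration of this procedure may be helpful. In the Euclidean plane, the set {z : ‖u − z‖ ≤ ‖v − z‖} is a halfspace, so the convexity hypothesis holds automatically, and with the Step 1 update as reconstructed above each C n+1 is an intersection of halfspaces, since d(T i x n , z) ≤ d(x n , z) becomes a linear constraint in z. The sketch below is our own illustration, not the authors' code: the two nonexpansive maps are projections onto overlapping disks (so a common fixed point exists), and Dykstra's algorithm computes the metric projection onto the accumulated halfspaces.

```python
import numpy as np

def halfspace_proj(z, a, b):
    # Projection onto the halfspace {z : a.z <= b}.
    viol = a @ z - b
    return z if viol <= 0.0 else z - viol * a / (a @ a)

def dykstra(x, halfspaces, sweeps=200):
    # Dykstra's algorithm: metric projection of x onto the intersection
    # of the given halfspaces (assumed nonempty).
    z = x.copy()
    corrections = [np.zeros_like(x) for _ in halfspaces]
    for _ in range(sweeps):
        for i, (a, b) in enumerate(halfspaces):
            y = halfspace_proj(z + corrections[i], a, b)
            corrections[i] = z + corrections[i] - y
            z = y
    return z

def proj_ball(center, radius):
    # Nonexpansive map: metric projection onto a closed disk.
    def T(x):
        v = x - center
        n = np.linalg.norm(v)
        return x if n <= radius else center + radius * v / n
    return T

# Two nonexpansive maps whose fixed-point sets (the disks) overlap,
# so a common fixed point exists and every C_n is nonempty.
T1 = proj_ball(np.array([0.0, 0.0]), 1.5)
T2 = proj_ball(np.array([2.0, 0.0]), 1.5)

x = np.array([5.0, 4.0])   # x_1; C_1 = R^2 (no constraints yet)
halfspaces = []
for n in range(50):
    for T in (T1, T2):
        u, v = T(x), x
        # d(T_i x_n, z) <= d(x_n, z) rewritten as the linear constraint
        # 2(v - u).z <= |v|^2 - |u|^2, i.e. a halfspace in the plane.
        a = 2.0 * (v - u)
        if a @ a > 1e-12:                  # skip if x is already fixed by T
            halfspaces.append((a, v @ v - u @ u))
    x = dykstra(x, halfspaces)             # x_{n+1} = P_{C_{n+1}} x_n
print(x, np.linalg.norm(T1(x) - x), np.linalg.norm(T2(x) - x))
```

The printed residuals ‖T i x − x‖ shrink toward zero, consistent with convergence to a common fixed point of the two projections.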
Proof. Suppose first that (a) holds, and let p ∈ ⋂_{i=1}^m Fix T i . Then, since a metric projection is nonexpansive and p ∈ C n+1 = Fix P C n+1 , we have d(x n+1 , p) = d(P C n+1 x n , p) ≤ d(x n , p) for all n ∈ N. It follows that the real sequence {d(x n , p)} is convergent to some non-negative number c p ∈ R, and that {x n } is bounded. Let t ∈ ]0, 1[. Since tx n+1 ⊕ (1 − t)p ∈ C n+1 , we obtain an estimate which shows that lim n→∞ d(x n , T i x n ) = 0 for every i = 1, 2, . . ., m.
On the other hand, since {x n } is bounded, its asymptotic center is a unique point x 0 ∈ X. For each i = 1, 2, . . ., m, we have lim sup n→∞ d(x n , T i x 0 ) ≤ lim sup n→∞ (d(x n , T i x n ) + d(x n , x 0 )) = lim sup n→∞ d(x n , x 0 ), so T i x 0 is also an asymptotic center of {x n }. By the uniqueness of the asymptotic center of {x n }, we have T i x 0 = x 0 for every i = 1, 2, . . ., m, and hence x 0 ∈ ⋂_{i=1}^m Fix T i .

Next, we show that ⋂_{i=1}^m Fix T i ⊂ C k for every k ∈ N. We prove this inclusion by induction. It is obvious for the case k = 1. Suppose ⋂_{i=1}^m Fix T i ⊂ C k and consider the case k + 1. Notice that, in this case, x k is defined. Let z ∈ ⋂_{i=1}^m Fix T i . Then, since each T i is nonexpansive, we have d(T i x k , z) = d(T i x k , T i z) ≤ d(x k , z) for each i = 1, 2, . . ., m. This fact and the assumption of the induction imply z ∈ C k+1 . Consequently, we obtain ⋂_{i=1}^m Fix T i ⊂ C k for all k ∈ N, and this is the desired result.

We now prove the latter part of the theorem. From the argument above, we have obtained the following: {x n } is bounded, and the asymptotic center x 0 of {x n } belongs to ⋂_{i=1}^m Fix T i . Let {x n_j } be an arbitrary subsequence of {x n }. Since {x n_j } is also bounded, there exists a unique asymptotic center y 0 ∈ X. We show that y 0 is identical to the asymptotic center x 0 of {x n }. Since every C n is a closed convex subset of X for n ∈ N, and {C n } is a decreasing sequence with respect to inclusion, by Lemma 1 we have y 0 ∈ ⋂_{n=1}^∞ C n , which leads to an inequality showing that y 0 is an asymptotic center of {x n }. From its uniqueness, we have y 0 = x 0 , and hence {x n } is ∆-convergent to x 0 .

This result deals with a finite family of nonexpansive mappings, and we note that it can be generalized to the case of an arbitrary infinite family of mappings; the proof can be modified for this case in a trivial way. However, from the viewpoint of practical calculations such as computer experiments, it is almost impossible to handle an infinite family of mappings.
The Case That the Underlying Space is Bounded
In this section, we consider the case where the underlying space X is bounded. Notice that we do not assume the boundedness of X in Theorem 3. Thus, in the procedure of the theorem, ⋂_{k=1}^∞ C k might be empty even if every C k is nonempty, as in the following example.
Example 1. Consider the graph of the function f : ]0, ∞[ → R defined by f (x) = log x for x ∈ ]0, ∞[. Then, the tangent line to the curve at the point p = (p 1 , log p 1 ) on the graph intersects the y-axis at u = (0, −1 + log p 1 ), and the normal line at p intersects the y-axis at v = (0, p 1 ² + log p 1 ). The midpoint of u and v is w = (0, (p 1 ² − 1)/2 + log p 1 ); see Figure 1. Using this fact, we consider the following procedure. Let T 1 and T 2 be the metric projections onto D 1 and D 2 , respectively. If the initial point x 1 lies on the y-axis and we generate the sequence {x n } by the scheme in Theorem 3, then, by symmetry, every x n will be on the y-axis if it is defined. Now, we assume that x 1 , x 2 , . . ., x n are defined and lie on the y-axis in descending order; x k+1 lies below x k for k = 1, 2, . . ., n − 1. Let p = T 1 x n = P D 1 x n = (p 1 , log p 1 ).
Then, by symmetry, T 2 x n is the reflection of T 1 x n across the y-axis. Further, from the calculations above, x n can be expressed as x n = (0, p 1 ² + log p 1 ). Since the points x 1 , x 2 , . . ., x n−1 lie above x n , by a simple calculation, we obtain the set C n+1 . This set forms a cone with the apex at (0, (p 1 ² − 1)/2 + log p 1 ), and therefore we have x n+1 = (0, (p 1 ² − 1)/2 + log p 1 ); see Figure 2. Thus x n+1 is on the y-axis again, and the same argument applies at the next step. From these facts, the sequence {x n } generated by this procedure with the initial point x 1 = (0, 0) is defined for every n ∈ N and lies on the y-axis in descending order, which implies that ⋂_{n=1}^∞ C n = ∅.

Suppose that the underlying space X is bounded. In this case, Kirk's fixed-point theorem guarantees that each nonexpansive mapping T i has a fixed point. However, we do not know whether a finite family {T i } of mappings has a common fixed point or not.
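The tangent- and normal-line intersections used in Example 1 are easy to verify numerically; the following check is our own, with a hypothetical value of p 1 :

```python
import numpy as np

p1 = 2.0                                   # hypothetical point on y = log x
tangent_at_0 = np.log(p1) + (1.0 / p1) * (0.0 - p1)  # tangent slope is 1/p1
normal_at_0 = np.log(p1) + (-p1) * (0.0 - p1)        # normal slope is -p1
midpoint_y = 0.5 * (tangent_at_0 + normal_at_0)      # midpoint of u and v

print(np.isclose(tangent_at_0, np.log(p1) - 1.0))              # u = (0, -1 + log p1)
print(np.isclose(normal_at_0, p1**2 + np.log(p1)))             # v = (0, p1^2 + log p1)
print(np.isclose(midpoint_y, (p1**2 - 1.0) / 2.0 + np.log(p1)))# w = (0, (p1^2-1)/2 + log p1)
```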
The following result shows that we can detect the non-existence of a common fixed point of {T i } within a finite number of iterations.
Theorem 4. Let X be a bounded Hadamard space, and suppose that a subset {z ∈ X | d(u, z) ≤ d(v, z)} of X is convex for any u, v ∈ X. Let {T i : X → X | i = 1, 2, . . ., m} be a family of nonexpansive mappings, and let {x n } be a sequence generated by the process in Theorem 3. Then, the following hold:
(i) if ⋂_{i=1}^m Fix T i ≠ ∅, then {x n } is well defined and ∆-convergent to a common fixed point of {T i };
(ii) if ⋂_{i=1}^m Fix T i = ∅, then C n = ∅ for some n ∈ N, and the generating process terminates in finitely many steps.

Proof. (i) is a direct result of Theorem 3. For (ii), we show its contrapositive; we suppose that C n is nonempty for all n ∈ N and obtain ⋂_{i=1}^m Fix T i ≠ ∅. Take a sequence {y n } ⊂ X such that y n ∈ C n for all n ∈ N. Since {y n } is bounded, it follows from Lemma 1 that its unique asymptotic center belongs to ⋂_{n=1}^∞ C n . Thus ⋂_{n=1}^∞ C n is nonempty, and hence ⋂_{i=1}^m Fix T i is also nonempty by Theorem 3. This is the desired result.
Applications to a Convex Minimization Problem
In this section, we apply the results discussed in the previous sections to the problem of finding a common minimizer of a family of convex functions.
Let f : X → ]−∞, ∞] be a proper lower semicontinuous convex function defined on a Hadamard space X. Then, for each x ∈ X, there exists a unique y x ∈ X such that f (y x ) + (1/2)d(x, y x )² = inf y∈X (f (y) + (1/2)d(x, y)²). Using this point, we define the resolvent R f : X → X of f by R f x = y x . Namely, R f x ∈ X is the unique minimizer of the function g(y) = f (y) + (1/2)d(x, y)². It is known [19] that R f satisfies an inequality for all x, y ∈ X from which, together with an inequality valid for all p, q, r, s ∈ X, it follows that R f is nonexpansive. See also [16,20,21].
The resolvent operator has the following important property: the set of minimizers of f is identical to the set of fixed points of R f , that is, argmin X f = Fix R f . From this fact, we can apply our results to finding a common minimizer of a given family of convex functions.
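On the real line, which is a simple Hadamard space with d(x, y) = |x − y|, the resolvent can be evaluated numerically. The sketch below is our own illustration (the function, bounds, and iteration count are hypothetical): iterating R f is the classical proximal-point scheme, and its iterates approach the minimizer of f, in line with argmin f = Fix R f .

```python
from scipy.optimize import minimize_scalar

def resolvent(f, x, bounds=(-50.0, 50.0)):
    # Resolvent R_f x = argmin_y [ f(y) + (1/2) d(x, y)^2 ] on the real line;
    # the one-dimensional subproblem is solved numerically.
    g = lambda y: f(y) + 0.5 * (x - y) ** 2
    return minimize_scalar(g, bounds=bounds, method="bounded").x

f = lambda y: abs(y - 3.0)     # proper, lower semicontinuous, convex; argmin f = {3}
x = 0.0
for _ in range(20):
    x = resolvent(f, x)        # proximal-point iteration: x_{k+1} = R_f x_k
print(round(x, 3))             # -> 3.0, the fixed point of R_f
```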
Theorem 5. Let X be a Hadamard space and suppose that a subset {z ∈ X | d(u, z) ≤ d(v, z)} of X is convex for any u, v ∈ X. Let { f i : X → ]−∞, ∞] | i = 1, 2, . . ., m} be a family of proper lower semicontinuous convex functions on X, and let R f i : X → X be the resolvent operator of f i for i = 1, 2, . . ., m. Generate a sequence {x n } in X with a sequence {C n } of subsets of X by the following steps:

Step 0. Let x 1 ∈ X be given, set C 1 = X, and let n = 1.
Step 1. Define C n+1 = {z ∈ C n | d(R f i x n , z) ≤ d(x n , z) for all i = 1, 2, . . ., m}, and then:
(1) if C n+1 ≠ ∅, then let x n+1 = P C n+1 x n , increment n by 1, and go to Step 1;
(2) if C n+1 = ∅, then C k = ∅ and x k is left undefined for all k ≥ n + 1, and the generating process terminates.
Then, the following conditions are equivalent:
(a) ⋂_{i=1}^m argmin X f i ≠ ∅;
(b) C n ≠ ∅ for every n ∈ N.
Further, in this case, {x n } is well defined and ∆-convergent to some x 0 ∈ ⋂_{i=1}^m argmin X f i .
Proof. From the properties of the resolvent operator, we have argmin X f i = Fix R f i for every i = 1, 2, . . ., m. Therefore, condition (a) is equivalent to ⋂_{i=1}^m Fix R f i ≠ ∅. Applying Theorem 3 with this fact, we have that condition (a) holds if and only if (b) holds, which is the desired result. The latter part of the theorem also follows directly.
Next, we consider the problem of finding a minimizer of a single function f on a given closed convex subset D. If f attains its minimum at some point x 0 ∈ D, then it is a solution to the
Figure 1. The graph of f .
"Mathematics"
] |